Categories
Personal

LinkedIn’s Hidden Jobs

Did you know about LinkedIn’s hidden jobs? LinkedIn hides job postings after a certain number of people have applied, unless the job poster pays a large sum of money per day to keep the posting searchable.

Free Job Post

I’ve never posted a job on LinkedIn before, so I was very confused by what they meant by a “free job post.”

Screenshot: “Post a free job”

With a free job post, up to 10 applicants can apply; after that, the job is “paused.”

Paused Jobs – LinkedIn’s Hidden Jobs

Good News:

  • Applicants can still apply to the job
  • It’s still technically free

Bad News:

  • The job is no longer searchable on the jobs page, meaning if you are searching for jobs that don’t already have 600 applicants, you are S.O.L.
Paused job
  • After 50 or so people apply to the free job, LinkedIn will automatically CLOSE THE APPLICATION TO NEW APPLICANTS.
Closed job application

Unpausing Jobs

If you want to unpause the job, it’s no problem. You just need to pay up to $85 a day so that your job is searchable.

Pay up or be hidden

Now, as you can imagine, this can become very expensive. Let’s do a quick example.

Let’s say that you need to fill 15 positions and you want them searchable for at least two weeks.

15 positions * 14 days * $85/day = $17,850

That is a lot of money to spend in two weeks.

Why Should You Care?

If you are an employer:

You should care because this can get very expensive.

If you are looking for a job:

You should care because a majority of the jobs you could apply to aren’t easily searchable. The only way you can find a “paused” job is if somebody shares the job link with you, or if you go to the company’s LinkedIn page and look under their jobs section.

It’s impractical to manually visit every single company’s jobs page, so many of these jobs go unnoticed by qualified job seekers.

Conclusion

Back in March of 2024 when I was looking for a new position, if I had known about the hidden jobs, I would have built a crawler to find paused jobs and apply to them if they matched what I was looking for, haha.

LinkedIn would never release these stats, but I’d be interested in seeing how many jobs exist that are still accepting applications but are paused.

Good luck to those looking for jobs.

Good luck to employers who have to shell out large amounts of money to find those job seekers.

Categories
gcp

Keda Pub/Sub Scaler


Keda Pub/Sub Scaler was an unnecessary challenge I had to face over the course of 3 days. If you cross-reference these 3 sources:

  • https://cloud.google.com/kubernetes-engine/docs/tutorials/scale-to-zero-using-keda#setup-env
  • https://keda.sh/docs/2.10/scalers/gcp-pub-sub/
  • https://keda.sh/docs/2.14/authentication-providers/gcp-workload-identity/

You can come to a reasonable idea of what you need to do, as long as you read them thoroughly…

Or you can see a working example here 😀

TL;DR

Go to the full example

Assumptions

  • Workload Identity is turned on for your cluster
  • Your node pool has “GKE Metadata Server” enabled
  • Your GCP user has the permissions to create a workload identity for a Kubernetes Service Account
  • You’re using Helm to install Keda

Getting Started

To get Keda working, you first need to get the Custom Metrics Stackdriver Adapter working.

Please see my article on GCP Horizontal Pod Autoscaling with Pub/Sub to learn how to set that up.

Configuring Keda

After getting the Custom Metrics Stackdriver Adapter working, it’s time to install Keda:

helm install --repo https://kedacore.github.io/charts --version 2.16.0 keda keda -n keda --create-namespace

Create a bash script to add a policy binding to the keda-operator KSA and call the script add-policy.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
KEDA_NAMESPACE=keda

gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/monitoring.viewer \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
      --condition=None
  echo "Added workload identity to keda-operator"

That is all you need to do to enable the Keda Pub/Sub Scaler. Continue on if you want a full example.
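
Before moving on, it’s worth sanity-checking the install. A quick check, assuming the Helm release and keda namespace used above:

# All KEDA pods from the Helm chart should be Running
kubectl get pods -n keda

# The keda-operator KSA is the subject of the IAM binding above
kubectl get serviceaccount keda-operator -n keda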


Full Example

This script will:

  • Install keda via helm
  • Add a policy binding for the keda-operator and pubsub-sa KSA’s
  • Create a topic/subscription
  • Deploy an app that reads from the pub/sub subscription
  • Create Keda TriggerAuthentication and ScaledObject objects
PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=custom-metrics-stackdriver
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
KEDA_NAMESPACE=keda
APP_NAMESPACE=default
PUBSUB_TOPIC=echo
PUBSUB_SUBSCRIPTION=echo-read
YAML_FILE_NAME=test-app.yaml

create(){
  helm install --repo https://kedacore.github.io/charts --version 2.16.0 keda keda -n keda --create-namespace

  sleep 3

  gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/monitoring.viewer \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
      --condition=None
  echo "Added workload identity to keda-operator"

  gcloud pubsub topics create $PUBSUB_TOPIC
  sleep 5
  echo "Created $PUBSUB_TOPIC Topic"

  gcloud pubsub subscriptions create $PUBSUB_SUBSCRIPTION --topic=$PUBSUB_TOPIC
  echo "Created Subscription $PUBSUB_SUBSCRIPTION to Topic $PUBSUB_TOPIC"

  gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/pubsub.subscriber \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$APP_NAMESPACE/sa/pubsub-sa
    echo "Added workload identity to to pubsub-sa"

cat > $YAML_FILE_NAME << EOL
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pubsub-sa
---
# [START gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]
# [START container_pubsub_workload_identity_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub
spec:
  selector:
    matchLabels:
      app: pubsub
  template:
    metadata:
      labels:
        app: pubsub
    spec:
      serviceAccountName: pubsub-sa
      containers:
        - name: subscriber
          image: us-docker.pkg.dev/google-samples/containers/gke/pubsub-sample:v2
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-gcp-credentials
spec:
  podIdentity:
    provider: gcp
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: pubsub-scaledobject
spec:
  scaleTargetRef:
    name: pubsub #Deployment
  minReplicaCount: 1
  maxReplicaCount: 2
  triggers:
    - type: gcp-pubsub
      authenticationRef:
        name: keda-trigger-auth-gcp-credentials
      metadata:
        subscriptionName: "echo-read" # Required
        value: "5"
        activationValue: "5"
#        credentialsFromEnv: GOOGLE_APPLICATION_CREDENTIALS_JSON
# [END container_pubsub_workload_identity_deployment]
# [END gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]

EOL

  kubectl apply -f $YAML_FILE_NAME -n $APP_NAMESPACE
  echo "Deployed test application"
}


delete(){
  kubectl delete -f $YAML_FILE_NAME -n $APP_NAMESPACE
  gcloud projects remove-iam-policy-binding projects/$PROJECT_ID \
        --role=roles/pubsub.subscriber \
        --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$APP_NAMESPACE/sa/pubsub-sa \
        --condition=None
  gcloud pubsub subscriptions delete $PUBSUB_SUBSCRIPTION
  gcloud pubsub topics delete $PUBSUB_TOPIC
  gcloud projects remove-iam-policy-binding projects/$PROJECT_ID \
        --role=roles/monitoring.viewer \
        --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
        --condition=None
  helm uninstall keda -n keda
}

create

In another window, send messages to the topic:

$ for i in {1..200}; do gcloud pubsub topics publish echo --message="Autoscaling #${i}"; done

It’s going to take a few minutes for the scaling to occur.

Watch as the pod count goes up. Eventually you’ll see the targets rise as well.

$ watch kubectl get hpa -n default

NAME                           REFERENCE           TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-pubsub-scaledobject   Deployment/pubsub   2/5 (avg)   1         2         2          10m
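
You can also inspect the ScaledObject directly; a quick check, assuming the names from the script above:

# READY and ACTIVE should both be True once messages are flowing
kubectl get scaledobject pubsub-scaledobject -n default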

Troubleshooting

  • You see the dreaded <unknown>/5 error.
    • This can happen for a variety of reasons. It’s best to check the output of all the commands and make sure they all worked. If any of them failed, then the HPA setup will fail. The commands below are a good place to start digging.
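
Two places usually surface the actual error; a hedged starting point, assuming the namespaces used in this example:

# The keda-operator logs normally name the failing scaler or missing permission
kubectl logs -n keda deploy/keda-operator

# Events on the ScaledObject often show auth or metric errors directly
kubectl describe scaledobject pubsub-scaledobject -n default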

Categories
argocd

Manage Multiple Kubernetes Clusters with Argocd


Argocd makes it easy to manage multiple kubernetes clusters with a single instance of Argocd. Let’s get to it.

Assumptions

  • You already have a remote cluster you want to manage.
  • You are using GKE.
    • If not, this guide can still help you. Just make sure the argocd-server and the argocd-application-controller service accounts have admin permissions to the remote cluster.
  • You are using helm to manage argocd.
    • If not then dang that must be rough.
  • You have the ability to create service accounts with container admin permissions.
    • Or the argocd-server and the argocd-application-controller service accounts already have admin permissions to the remote cluster.

IAM Shenanigans

We need to:

  • Create a service account with the container.admin role.
  • Bind the iam.workloadIdentityUser role to the Kubernetes service accounts argocd-server & argocd-application-controller so that they can impersonate the service account that will be created.

Here’s a simple script to do just that. Call it create-gsa.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=argo-cd-01
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")


gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME \
  --description="Argo CD remote cluster management" \
  --display-name="$SERVICE_ACCOUNT_NAME"
echo "Created google service account(GSA) $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

sleep 5 #Sleep is because iam policy binding fails sometimes if it's used too soon after service account creation


gcloud projects add-iam-policy-binding $PROJECT_ID \
 --role roles/container.admin \
 --member serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added role container.admin to GSA $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

# Needed so KSA can impersonate GSA account
gcloud iam service-accounts add-iam-policy-binding  \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-server]" \
  $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added iam policy for KSA serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-server]"

# Needed so KSA can impersonate GSA account
gcloud iam service-accounts add-iam-policy-binding  \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-application-controller]" \
  $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added iam policy for KSA serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-application-controller]"

Get IP & Certificate Authority of the Remote K8s Clusters

Get Public IP and Unencoded Cluster Certificate

In the console

  • Go to the cluster details.
  • Look under the Control Plane Networking section for the public endpoint and the “Show cluster certificate” button.
  • Press the “Show cluster certificate” button to get the certificate.

Base64 Encode Cluster Certificate

  • Copy the certificate to a file called cc.txt
    • Be sure to copy everything, including the BEGIN/END CERTIFICATE lines
  • Run the base64 command to encode the certificate:
base64 cc.txt -w 0 && echo ""
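
If you’d rather stay in the terminal, the same two values can be pulled with gcloud; a sketch, with CLUSTER_NAME and LOCATION as placeholders for your remote cluster:

# Public endpoint of the control plane
gcloud container clusters describe CLUSTER_NAME --location LOCATION \
  --format="value(endpoint)"

# Cluster CA certificate, already base64-encoded
gcloud container clusters describe CLUSTER_NAME --location LOCATION \
  --format="value(masterAuth.clusterCaCertificate)"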

Create Argocd Helm Chart Values File

Add the base64-encoded cluster certificate and the public IP to CLUSTER_CERT_BASE64_ENCODED & CLUSTER_IP respectively.

Create a bash script create-yaml.sh and execute it.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=argo-cd-01
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
CLUSTER_CERT_BASE64_ENCODED=""
CLUSTER_IP="" # Example 35.44.34.111. DO NOT INCLUDE "https://"

cat > values.yaml <<EOL
configs:
  clusterCredentials:
    remote-cluster:
      server:  https://${CLUSTER_IP}
      config:
        {
          "execProviderConfig": {
            "command": "argocd-k8s-auth",
            "args": [ "gcp" ],
            "apiVersion": "client.authentication.k8s.io/v1beta1"
          },
          "tlsClientConfig": {
            "insecure": false,
            "caData": "${CLUSTER_CERT_BASE64_ENCODED}"
          }
        }
  rbac:
    ##################################
    # Assign admin roles to users
    ##################################
    policy.default: role:readonly  # ***** Allows you to view everything without logging in.
    policy.csv: |
      g, myAdmin, role:admin
  ##################################
  # Assign permission login and to create api keys for  users
  ##################################
  cm:
    accounts.myAdmin: apiKey, login
    users.anonymous.enabled: true
  params:
    server.insecure: true #communication between services is via http

  ##################################
  #  Assigning the password to the users. Argo-cd uses bcrypt.
  #  To generate a new password hash, use https://bcrypt.online/ and add it here.
  ##################################
  secret:
    extra:
      accounts.myAdmin.password: \$2y\$10\$p5knGMvbVSSBzvbeM1tLne2rYBW.4L6aJqN.Fp1AalKe3qh3LuBq6 #fancy_password
      accounts.myAdmin.passwordMtime: 1970-10-08T17:45:10Z


controller:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

server:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
  service:
    type: LoadBalancer


EOL

Run Helm Install/Upgrade

helm install --repo  https://argoproj.github.io/argo-helm --version 7.6.7 argocd argo-cd -f values.yaml 

If you run helm upgrade, make sure you delete the argocd-server and argocd-application-controller pods so that the service account changes take effect.
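
Once the release is up, you can confirm the remote cluster credentials were rendered; a quick check, assuming the chart created the cluster secret in the namespace you installed into:

# clusterCredentials entries become Secrets labeled as clusters
kubectl get secrets -l argocd.argoproj.io/secret-type=cluster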

Confirm everything is working

You can create your own application on the remote cluster, or run this script to create one. Create a bash script called apply-application.sh and execute it.


YAML_FILE_NAME="guestbook-application.yaml"

cat > $YAML_FILE_NAME << EOL
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  destination:
    namespace: guestbook
    name:  remote-cluster #Name of the remote cluster
  project: default
  source:
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps # Check to make sure this still exists
    targetRevision: HEAD
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

EOL

kubectl apply -f $YAML_FILE_NAME

The Application should be automatically synced and healthy shortly after it is applied.
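
You can watch it converge from the CLI as well, assuming the argocd namespace used in the manifest above:

# SYNC STATUS should reach Synced and HEALTH STATUS Healthy
kubectl get application guestbook -n argocd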

Troubleshooting

  • If you did a helm upgrade instead of a helm install, then you may want to delete the argocd-server and argocd-application-controller pods to make sure the service account changes took effect.

Categories
kubernetes

GCP Horizontal Pod Autoscaling with Pub/Sub


Google Just Why?

GCP Horizontal Pod Autoscaling with Pub/Sub shouldn’t be as complicated as it is. I’m not sure why, but following this GCP article, it appears workload identity doesn’t work with the Stackdriver adapter.

I did it the “old” way of using Google Service Accounts instead.

Assumptions

  • You already have a k8s cluster running.
  • You have kubectl installed and you are authenticated into your cluster
  • You have admin permissions with GKE to do the following
    • Create pub/sub topics & subscriptions
    • Create service accounts
    • Admin permissions inside of your k8s cluster
  • You already have workload identity turned on for BOTH your cluster and node pool
Cluster with workload identity
Node pool with GKE Metadata Server enabled

If all the assumptions are true, then you’re ready to run the script below. If not, follow this GCP guide up until the “Deploying the Custom Metrics Adapter” section.

Let’s Get Down to HPA

First, create a manifest file for an application and call the file test-app.yaml.

This manifest will be applied by the script below, so make sure it’s in the working directory when you execute the script.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pubsub-sa
---
# [START gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]
# [START container_pubsub_workload_identity_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub
spec:
  selector:
    matchLabels:
      app: pubsub
  template:
    metadata:
      labels:
        app: pubsub
    spec:
      serviceAccountName: pubsub-sa
      containers:
        - name: subscriber
          image: us-docker.pkg.dev/google-samples/containers/gke/pubsub-sample:v2
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub
spec:
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - external:
        metric:
          name: pubsub.googleapis.com|subscription|num_undelivered_messages
          selector:
            matchLabels:
              resource.labels.subscription_id: echo-read
        target:
          type: AverageValue
          averageValue: 2
      type: External
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub
# [END container_pubsub_workload_identity_deployment]
# [END gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]

You can find the container code here
https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/main/databases/cloud-pubsub/main.py


import datetime
import time

# [START gke_pubsub_pull]
# [START container_pubsub_pull]
from google import auth
from google.cloud import pubsub_v1


def main():
    """Continuously pull messages from subsciption"""

    # read default project ID
    _, project_id = auth.default()
    subscription_id = 'echo-read'

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(
        project_id, subscription_id)

    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        """Process received message"""
        print(f"Received message: ID={message.message_id} Data={message.data}")
        print(f"[{datetime.datetime.now()}] Processing: {message.message_id}")
        time.sleep(3)
        print(f"[{datetime.datetime.now()}] Processed: {message.message_id}")
        message.ack()

    streaming_pull_future = subscriber.subscribe(
        subscription_path, callback=callback)
    print(f"Pulling messages from {subscription_path}...")

    with subscriber:
        try:
            streaming_pull_future.result()
        except Exception as e:
            print(e)
# [END container_pubsub_pull]
# [END gke_pubsub_pull]


if __name__ == '__main__':
    main()

Next, create a bash script called run-example.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=custom-metrics-stackdriver
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
EXAMPLE_NAMESPACE=default
PUBSUB_TOPIC=echo
PUBSUB_SUBSCRIPTION=echo-read

create (){

  kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
  sleep 5
  kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
  # running twice to make sure everything gets created
  echo "Created custom-metrics namespace and additional resources"

  gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME \
    --description="custom metrics stackdriver" \
    --display-name="custom-metrics-stackdriver"
  echo "Created google service account(GSA) $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"
  
  sleep 5 #Sleep is because iam policy binding fails sometimes if it's used too soon after service account creation

  gcloud projects add-iam-policy-binding $PROJECT_ID \
   --role roles/monitoring.viewer \
   --member serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
  echo "added role monitoring.viewer to GSA $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

  gcloud iam service-accounts add-iam-policy-binding  \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$PROJECT_ID.svc.id.goog[custom-metrics/custom-metrics-stackdriver-adapter]" \
    $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
  echo "added iam policy for KSA custom-metrics-stackdriver-adapter"

  kubectl annotate serviceaccount --namespace custom-metrics \
    custom-metrics-stackdriver-adapter \
    iam.gke.io/gcp-service-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
  echo "annotated KSA custom-metrics-stackdriver-adapter with GSA $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

  gcloud pubsub topics create $PUBSUB_TOPIC
  sleep 5
  echo "Created Topic"

  gcloud pubsub subscriptions create $PUBSUB_SUBSCRIPTION --topic=$PUBSUB_TOPIC
  echo "Created Subscription to Topic"


  kubectl apply -f test-app.yaml -n $EXAMPLE_NAMESPACE
  echo "Deployed test application"

  gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
    --role=roles/pubsub.subscriber \
    --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$EXAMPLE_NAMESPACE/sa/pubsub-sa
  echo "Added workload identity to to pubsub-sa"
}

delete() {
  kubectl delete -f test-app.yaml -n $EXAMPLE_NAMESPACE
  kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml

  echo  $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
  gcloud iam service-accounts delete $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com --quiet

  gcloud projects remove-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/pubsub.subscriber \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$EXAMPLE_NAMESPACE/sa/pubsub-sa

  gcloud pubsub topics delete $PUBSUB_TOPIC
  gcloud pubsub subscriptions delete $PUBSUB_SUBSCRIPTION
}

create

If you are prompted to enter a condition, choose “None”.

Confirm Application is Working

Make sure the application pod is running:

$ kubectl get pods

NAME                      READY   STATUS    RESTARTS   AGE
pubsub-7f44cf5977-rbztk   1/1     Running   0          16h

Make sure the hpa is running:

$ kubectl get hpa
NAME     REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
pubsub   Deployment/pubsub   0/2 (avg)   1         4         1          1m

Let’s trigger an auto-scale event by sending messages to the echo topic.

 for i in {1..200}; do gcloud pubsub topics publish echo --message="Autoscaling #${i}";  done

It’ll take 2-5 minutes for the scaling event to occur. Yes, this is slow.

After a while, you should see that the pod count has increased and that it is reflected in the hpa status as well.

$ kubectl get hpa

NAME     REFERENCE           TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
pubsub   Deployment/pubsub   25/2 (avg)   1         4         4          74m



$ kubectl get pods

NAME                      READY   STATUS        RESTARTS         AGE
pubsub-7f44cf5977-f54hc   1/1     Running       0                25s
pubsub-7f44cf5977-gjbsh   1/1     Running       0                25s
pubsub-7f44cf5977-n7ttr   1/1     Running       0                25s
pubsub-7f44cf5977-xglct   1/1     Running       0                26s

Troubleshooting

Always check the output of run-example.sh first. Odds are you didn’t have permissions to do something. You can always run the delete command and start all over.

***NOTE: if you delete and re-create, you’ll need to change the name of the service account because GCP does soft deletes on service accounts.

Problems

HPA has unknown under targets.

$ kubectl get hpa

NAME     REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
pubsub   Deployment/pubsub   unknown/2 (avg)   1         4         4          64m
  • The reason for this is that some configuration went wrong. Check to make sure every command executed correctly.
  • You can even check the logs from the custom-metrics pod to make sure nothing is wrong.
austin.poole@docker-and-such:~$ kubectl get pods -n custom-metrics
NAME                                                 READY   STATUS    RESTARTS   AGE
custom-metrics-stackdriver-adapter-89fdf8645-bbn4l   1/1     Running   0          5h11m
austin.poole@docker-and-such:~$ kubectl logs custom-metrics-stackdriver-adapter-89fdf8645-bbn4l -n custom-metrics
I1127 13:52:25.333064       1 adapter.go:217] serverOptions: {true true true true false   false false}
I1127 13:52:25.336266       1 adapter.go:227] ListFullCustomMetrics is disabled, which would only list 1 metric resource to reduce memory usage. Add --list-full-custom-metrics to list full metric resources for debugging.
I1127 13:52:29.127164       1 serving.go:374] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
....
  • Make sure that the external metrics APIService exists by querying the api-server.
$ kubectl proxy --port 8080 &

Starting to serve on 127.0.0.1:8080


$ curl http://localhost:8080/apis/external.metrics.k8s.io/v1beta1

{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "externalmetrics",
      "singularName": "",
      "namespaced": true,
      "kind": "ExternalMetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}

If the external metrics APIService is missing, then re-run:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
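
You can also query the external metrics API for the exact metric the HPA uses; a minimal probe, assuming the default namespace and echo-read subscription from this example (the | characters in the metric name must be URL-encoded as %7C):

# A healthy response is an ExternalMetricValueList with one item per matching subscription
kubectl get --raw \
  "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/pubsub.googleapis.com%7Csubscription%7Cnum_undelivered_messages?labelSelector=resource.labels.subscription_id%3Decho-read"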

Thanks for taking the time to read about GCP Horizontal Pod Autoscaling with Pub/Sub.

Cheers!

Categories
argocd programming

ArgoCD: Add new local accounts through helm chart?


I ran into this Reddit post when I was trying to create a local account via the argo-cd helm chart. I can’t comment on the post anymore, but I can answer the question here.

Helm Chart Version

Argo-cd helm chart version: 7.6.7

Custom Helm Values File

Create a custom values file called values.yaml

configs:
  rbac:
    policy.default: role:readonly  # ***** Allows you to view everything without logging in.
    ##################################
    # Assign admin roles to users
    ##################################
    
    policy.csv: |
      g, baylin2, role:admin
      g, joesmith, role:admin
      g, vpoole, role:admin


  ##################################
  # Assign permission to login and to create API keys for users
  ##################################
  cm:
    accounts.baylin2: apiKey, login
    accounts.joesmith: apiKey, login
    accounts.vpoole: apiKey, login
    users.anonymous.enabled: true
  params:
    server.insecure: true #communication between services is via http

  ##################################
  #  Assigning the password to the users. Argo-cd uses bcrypt.
  #  To generate a new password hash, use https://bcrypt.online/ and add it here.
  ##################################
  secret:
    extra:
      accounts.baylin2.password: $2y$10$p5knGMvbVSSBzvbeM1tLne2rYBW.4L6aJqN.Fp1AalKe3qh3LuBq6 #fancy_password
      accounts.baylin2.passwordMtime: 2024-10-08T17:45:10Z

      accounts.joesmith.password: $2y$10$p5knGMvbVSSBzvbeM1tLne2rYBW.4L6aJqN.Fp1AalKe3qh3LuBq6 #fancy_password
      accounts.joesmith.passwordMtime: 2024-10-08T17:45:10Z

      accounts.vpoole.password: $2y$10$p5knGMvbVSSBzvbeM1tLne2rYBW.4L6aJqN.Fp1AalKe3qh3LuBq6 #fancy_password
      accounts.vpoole.passwordMtime: 2024-10-08T17:45:10Z

server:
  service:
    type: LoadBalancer

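Side note: the bcrypt hash doesn’t have to come from a website. If you have the argocd CLI installed, recent versions can generate it locally (fancy_password is just this example’s placeholder):

# Prints a bcrypt hash suitable for accounts.<user>.password
argocd account bcrypt --password fancy_password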

Install the Argocd Helm Release

helm install --repo  https://argoproj.github.io/argo-helm --version 7.6.7 argocd argo-cd -f values.yaml 

Get the public IP address associated with the service, or use port forwarding on the service.

$ kubectl get svc
NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
argocd-applicationset-controller   ClusterIP      10.114.227.176   <none>           7000/TCP                     3m19s
argocd-dex-server                  ClusterIP      10.114.234.168   <none>           5556/TCP,5557/TCP            3m19s
argocd-redis                       ClusterIP      10.114.235.236   <none>           6379/TCP                     3m18s
argocd-repo-server                 ClusterIP      10.114.226.23    <none>           8081/TCP                     3m19s
argocd-server                      LoadBalancer   10.114.234.103   35.196.1.1   80:30333/TCP,443:30713/TCP   3m1

or

$ kubectl port-forward svc/argocd-server --address 0.0.0.0 8080:80

Now access the accounts page via the load balancer IP or port forwarding. You should see the local accounts you defined listed there.

Try logging in with one of the users you defined, for example:

username: vpoole

password: fancy_password
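
You can also verify the accounts from the argocd CLI; a sketch, assuming the load balancer IP from the service listing above and that the server is running plain HTTP (server.insecure: true):

# --plaintext because the server is serving HTTP, not TLS
argocd login 35.196.1.1:80 --username vpoole --password fancy_password --plaintext

# List the local accounts and their capabilities
argocd account list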