There are currently two supported methods for connecting to your Trainy cluster after onboarding: Teleport and Tailscale.

Teleport

Identity Providers

Teleport supports the following authentication providers:
  • Google Workspace (Available)
  • Okta (Contact us for setup)
  • GitHub (Contact us for setup)
  • Auth0 (Contact us for setup)
  • and more!
To begin, users should install the tsh CLI on the machine they want to access their cluster from by following these instructions.
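For example, tsh can be installed via Homebrew on macOS or via Teleport's install script on Linux (the version below is a placeholder; follow the linked instructions for your platform):
# macOS
brew install teleport

# Linux (replace 16.0.0 with the version from the instructions)
curl https://goteleport.com/static/install.sh | bash -s 16.0.0

# verify the installation
tsh version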

Authentication example: Google

To authenticate, use tsh login. The following example uses Google as the identity provider, but this can be changed by setting the --auth flag value accordingly. Upon running tsh login, a browser window should open prompting the user to complete the auth flow, as shown below.
tsh login \
  [email protected] \
  --proxy=trainy.teleport.sh:443 \
  --auth=google
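For instance, if your organization uses Okta instead, only the --auth value changes (the connector name okta below is an assumption; your Trainy admin can confirm the exact value):
tsh login \
  [email protected] \
  --proxy=trainy.teleport.sh:443 \
  --auth=okta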
After completing the auth flow, you'll be presented with a page confirming that your authentication was successful. Afterwards you can list which clusters you have access to:
$ tsh kube ls
Kube Cluster Name Labels             Selected
----------------- ------------------ --------
my-cluster        tenant=myawesomeco
To configure a short-lived kubeconfig and start accessing one of your clusters:
# get kubeconfig first
$ tsh kube login my-cluster
Logged into Kubernetes cluster "my-cluster". Try 'kubectl version' to test the connection.

# then test connection
$ kubectl version
Client Version: v1.31.6-dispatcher
Kustomize Version: v5.4.2
Server Version: v1.33.1
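To confirm which cluster your kubeconfig currently points at, you can list your clusters again; Teleport marks the selected one (sample output, assuming the cluster above):
$ tsh kube ls
Kube Cluster Name Labels             Selected
----------------- ------------------ --------
my-cluster        tenant=myawesomeco *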

User isolation via per-user namespaces

After authentication, every user has full access to the shared default namespace as well as a dedicated namespace that only they can access, derived from the username they used to authenticate. For example, if you authenticated as [email protected], you can perform administrative actions in the trainy-myawesomeco-myusername namespace. We recommend that organizations with strict isolation requirements, especially around isolating secrets/credentials between users, manage jobs in each user's dedicated namespace.
# create your dedicated namespace
kubectl create ns trainy-myawesomeco-myusername

# set your context 
kubectl config set-context --current --namespace=trainy-myawesomeco-myusername

# start interacting with konduktor
konduktor status
Currently, job management (listing/creating/deleting) is hard-isolated between users via namespaces: one user cannot see or delete another user's job unless the job was launched in the shared default namespace. We are working on an admin role for creating/deleting jobs, as well as broader user scopes for listing other users' jobs to understand cluster capacity.
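As a quick sanity check of this isolation (the second namespace and the user string in the output below are hypothetical), listing pods in another user's dedicated namespace is denied:
# your own dedicated namespace works
$ kubectl get pods -n trainy-myawesomeco-myusername

# another user's namespace is denied
$ kubectl get pods -n trainy-myawesomeco-otheruser
Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot list resource "pods" in API group "" in the namespace "trainy-myawesomeco-otheruser"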

Tailscale

Your Trainy admin will create a shareable link that users can use to access their Trainy cluster. Afterwards, users can run tailscale status to check that the cluster is visible, and tailscale configure kubeconfig my-cluster to connect to it.
# check if cluster is available
$ tailscale status
100.11.111.11   my-macbook-air-1 my-macbook-air-1.taila1111c.ts.net macOS   -
100.11.111.121  my-cluster tagged-devices linux   -

# get credentials to connect to cluster
$ tailscale configure kubeconfig my-cluster

# test connection
$ kubectl version
Client Version: v1.31.6-dispatcher
Kustomize Version: v5.4.2
Server Version: v1.33.1
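Both methods write an entry into your kubeconfig, so if you use more than one you can list the resulting contexts and switch between them with kubectl (the context names are illustrative and depend on your proxy and cluster names):
# list available contexts
$ kubectl config get-contexts

# switch to a specific context
$ kubectl config use-context my-cluster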