Tutorial: Intro to ksonnet

Throughout this tutorial, you’ll see italicized text next to an expander icon [+]. You can click the [+] to get more context about the topic in question.

Overview

This tutorial assumes no prior knowledge of ksonnet. You don’t need expertise with Kubernetes, but it will be helpful to have seen the command kubectl apply, which is used to deploy applications onto Kubernetes clusters.

What we’ll build

In this tutorial, we’ll walk through the steps of using ksonnet to configure and run a basic web app on your cluster. This app is based on the classic Kubernetes guestbook example, a form for submitting and searching through simple messages. When deployed, your guestbook will look like the following:

screenshot of Guestbook app

Along the way, you’ll see the most common ksonnet workflows in action, learn about best practices, and understand how ksonnet concepts tie together to streamline the process of writing Kubernetes manifests.

Additional context

If you have any of the following questions, click the corresponding [+] to learn more:

  • What do you mean by a manifest? [+]

    If you’ve ever written a YAML or JSON file and used it in a command like kubectl apply, you’ve written a manifest.

    More specifically, to run code ON a Kubernetes cluster, you use a manifest to declare which API resources are needed. Such resources might include a Deployment or NetworkPolicy.

    Note that ksonnet is concerned with your manifests, not your source code.

  • Why not YAML or JSON manifests? [+]

    YAML and JSON have their obvious pros: YAML is very human-readable, and JSON is machine-readable. Both are straightforward and interchangeable with each other.

    However, you may have also noticed that YAML/JSON make manifests:

    • Tedious to write, especially against the official Kubernetes API
    • Hard to refactor and manage at scale (tools like kubectl patch only get you so far)

    ksonnet uses the Jsonnet language to address these sorts of problems.

  • What is Jsonnet, and why does ksonnet use it for manifests? [+]

    You don’t need to know Jsonnet syntax for this tutorial, but you should be aware that it powers ksonnet under the hood.

    For simplicity, think of Jsonnet as a fancier superset of JSON, one that supports features like variables and object concatenation. Jsonnet’s syntax allows us to be more concise—avoiding the copy-and-paste problem of manifests written in pure JSON or YAML. It also allows for features like parameter customization.
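    As a small, hedged sketch (not taken from the tutorial’s app), here is what variables and object concatenation look like in Jsonnet:

```jsonnet
// A local variable, referenced twice below.
local appName = "guestbook-ui";

// Object concatenation: `+` merges two objects, and `metadata+:`
// merges into the nested object instead of replacing it.
{ metadata: { name: appName } } +
{ metadata+: { labels: { app: appName } } }

// Evaluates to:
// { "metadata": { "labels": { "app": "guestbook-ui" }, "name": "guestbook-ui" } }
```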

    You can see this in Jsonnet -> YAML conversion below:

    side-by-side screenshot of Jsonnet code and its YAML equivalent

    If you’d like more info, you can see the official Jsonnet documentation.

  • What sort of tool is ksonnet? [+]

    ksonnet is a framework: an opinionated way to manage your Kubernetes manifests. This framework is supported by two parts:

    • A CLI (command line interface): A lot of the ksonnet “magic” comes from the file structure it autogenerates for you. While you can always modify these files directly, ksonnet’s ks command is a more user-friendly way of carrying out common operations.

    • A library: ksonnet wraps up your version of the Kubernetes API into a library (k.libsonnet) that you can use when writing your manifests. If you use a text editor like VSCode, this allows you to take advantage of tooling like autocomplete. NOTE: You do not need to install ksonnet-lib! It is included by default when you use the CLI to create a new application.
  • How is this tutorial different from the “Tour of ksonnet”? [+]

    Think of the Tour of ksonnet as a supplement to the tutorial. This tutorial explains “what” certain ks commands are and “why” you’d want to use them, while the interactive tour focuses more on the mechanics of “how” they work. It takes a look under the hood, at specific lines of ksonnet files.

If you have outstanding questions that remain unanswered by the end of this tutorial, help us improve by raising a documentation issue.

Now, let’s get started!

0. Prerequisites

Before we begin, ensure that:

  • Your environment variable $KUBECONFIG specifies a valid kubeconfig file, which points at the cluster you want to use for this demonstration. [+]

    If you’ve ever kubectl-ed into a cluster, you have a kubeconfig file somewhere locally that allowed you to make that connection. (It often lives inside a directory like $HOME/.kube/.) (Still confused about kubeconfigs?)

  • Your cluster has kube-dns running, which the application you’ll be building depends on. [+]

    How do you know if kube-dns is running? Try running

      kubectl get pods -n kube-system | grep kube-dns
    

    If this returns empty, you’ll want to set up kube-dns by following these instructions.

1. Initialize your app

In this section, we’ll be using the ksonnet CLI to set up your application.

Define “application”

First off, what exactly do we mean by a ksonnet application? Think of an application as a well-structured directory of Kubernetes manifests, which typically tie together in some way.

In this case, our app manifests collectively define the following architecture:

diagram of Guestbook architecture

Our UI, datastore, search service, and logging stack are each going to be defined by a separate manifest. Note that this tutorial only covers the UI and datastore for your app. A future tutorial will address the search service and logging stack.

(Does a ksonnet application have to be some sort of web app?) [+]

Not at all! Generally, the manifests in a ksonnet app do produce a single overarching output like the Guestbook. However, this is not a strict rule—apps are really just a method of organization. It is up to you, the developer, to determine what should be grouped together.

Commands

Now let’s run some commands:

  1. First, create a “sandbox” namespace on your Kubernetes cluster that we can use for this tutorial.

    It looks like there are a lot of commands, but don’t worry! They’re meant to be copied and pasted, and are as cluster-agnostic as possible.

    (Why are we doing this?) [+]

    Since we’ll be deploying this Guestbook app to your Kubernetes cluster, we want to ensure that the app is:

    1. Isolated from pre-existing resources on your cluster (e.g. to avoid naming collisions)
    2. Easily cleaned up at the end of this tutorial

    We’ve done this here by setting up a dedicated Kubernetes namespace, and a corresponding context that points at that namespace. This context is saved in your active kubeconfig file, and is used by ksonnet in the next step.

     kubectl create namespace ks-dev
    
     CURRENT_CONTEXT=$(kubectl config current-context)
     CURRENT_CLUSTER=$(kubectl config get-contexts $CURRENT_CONTEXT | tail -1 | awk '{print $3}')
     CURRENT_USER=$(kubectl config get-contexts $CURRENT_CONTEXT | tail -1 | awk '{print $4}')
    
     kubectl config set-context ks-dev \
       --namespace ks-dev \
       --cluster $CURRENT_CLUSTER \
       --user $CURRENT_USER
    
  2. Initialize your app, using the ks-dev context that we created in step (1).

    If you are running Kubernetes 1.8, you will also need to append --api-spec=version:v1.8.0 to the end of the following command:

     ks init guestbook --context ks-dev  
    

    (What’s happening here?) [+]

    Quite a bit, actually!

    Command Syntax

    First, let’s start with that --context flag. In general, when you set up a new app, ksonnet pulls cluster info from your active kubeconfig file. This makes it really easy for ksonnet to know which cluster to deploy your manifests to later on.

    Here, we’re being even more specific by naming a context from our kubeconfig (one that is tied to the ks-dev namespace, as you might recall). Later in this tutorial, we’ll cover this idea more formally.

    Command Output

    This command creates a new guestbook/ directory to contain all of your app-specific manifests. It also autogenerates a bunch of files and subdirectories, which we’ll look at next.

  3. See your results:

     cd guestbook
    

    (What’s inside?) [+]

    If you examine the file structure inside your directory, you’ll see something like the following:

       .
       ├── app.yaml
       ├── components                      // *What* is deployed to your cluster
       │   └── params.libsonnet
       ├── environments                    // *Where* your app is deployed
       │   ├── base.libsonnet
       │   └── default
       │       ├── main.jsonnet
       │       ├── params.libsonnet
       │       └── spec.json
       ├── lib                             // *Helper code* specific to your app
       └── vendor                          // *External libraries* that your app leverages
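    To make the layout a bit more concrete, the generated environments/default/main.jsonnet is a small Jsonnet file along these lines (a sketch from memory; the exact contents vary between ksonnet versions):

```jsonnet
// Approximate contents of environments/default/main.jsonnet.
// `base.libsonnet` pulls in everything under components/, and
// `k.libsonnet` is the vendored Kubernetes helper library.
local base = import "base.libsonnet";
local k = import "k.libsonnet";

base + {
  // Environment-specific overrides would go here.
}
```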
     
  4. Check your ksonnet app into version control:

     git init
     git add .
     git commit -m "initialize guestbook app"
    

    (Why is this neat?) [+]

    The idea is to treat “configuration as code”. If we describe as much as possible about your configuration as version-controllable files, it is easier to capture the entire desired state of your Kubernetes resources. This also allows you to incorporate a regular review-approve-merge workflow for any configuration changes, and to reference or roll back changes if necessary.

Key takeaways

The structure of a ksonnet app is very important. Not only is it more modular than the standard “pile of YAML”, it is responsible for the ksonnet magic. In other words, this structure allows the ksonnet CLI to make assumptions about the app and thereby automate certain workflows.

2. Generate and deploy an app component

Now that we have a working directory for your app, let’s start adding manifests that we can deploy! These manifests define the following components of your app:

  • A UI (AngularJS/PHP) - the webpage that your user interacts with
  • A basic datastore (Redis) - where user messages are stored

This process is mostly automated by the ksonnet CLI. Any boilerplate YAML will be autogenerated, so you can avoid all that copying and pasting.

Define “component”

We’ve alluded to this a bit before, but any set of discrete components can be combined to make a ksonnet app:

diagram of components

(Can you please be more precise than that?) [+]

Components can be as simple as a single Kubernetes resource (e.g. a Deployment) or as complex as a complete logging stack (e.g. EFK). More concretely, a component corresponds to a Kubernetes manifest in components/. You can add this manifest in two ways:

  • Typically, autogenerate it with the ks generate command. (Jsonnet manifest)
  • Alternatively, you can manually drop in a file. (YAML, JSON, or Jsonnet manifests)

The manual approach supports all three languages, which allows you to introduce ksonnet to existing codebases without significant rewrites. Because we are dealing with a new ksonnet app, we’ll be focusing on autogeneration for this tutorial.

To iteratively add new components, we’ll use the following command pattern:

  • ks generate - Generate the manifest for a particular component
  • ks apply - Apply all available manifests to your cluster

Commands (UI component)

First we’ll begin with the Guestbook UI. Its manifest will declare two Kubernetes API resources:

  1. A Deployment to run
  2. A Service to expose it to external users’ requests.

The container image itself is written with PHP and AngularJS.

screenshot of Guestbook app

To set up the Guestbook UI component:

  1. First generate the manifest that describes the Guestbook UI:

       ks generate deployed-service guestbook-ui \
       --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
       --type ClusterIP
    

    (I have a lot of questions about what just happened.) [+]

    Command Syntax
    • deployed-service - The “pattern” we use to generate our manifest (officially, a prototype).
    • guestbook-ui - The filename for our resulting manifest (in full, components/guestbook-ui.jsonnet). This is also the metadata.name used for all Kubernetes API objects that the manifest defines.
    • --image gcr.io/heptio-images/ks-guestbook-demo:0.1 - sets the container image for our Deployment
    • --type ClusterIP - sets how our Service is exposed (as opposed to NodePort or LoadBalancer)

    The parameters from the command-line flags are specific to the deployed-service prototype, and are used to customize it. Different prototypes support different sets of parameters. If we wanted to create another component based on the configMap prototype, for instance, we’d need to specify a --data flag.
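    Concretely, those flag values are recorded in components/params.libsonnet. The exact fields depend on the prototype’s defaults, but the entry looks something like this (an illustrative sketch, not verbatim output):

```jsonnet
{
  global: {
    // Parameters shared across components (edited manually).
  },
  components: {
    "guestbook-ui": {
      image: "gcr.io/heptio-images/ks-guestbook-demo:0.1",
      name: "guestbook-ui",
      replicas: 1,
      type: "ClusterIP",
      // ...plus other prototype defaults, such as ports.
    },
  },
}
```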

  2. View the YAML equivalent:

       ks show default
    

    (So my actual manifest file is *.jsonnet, not *.yaml?) [+]

    Yes, ksonnet autogenerates components/ manifests with Jsonnet, but you can also drop YAML or JSON files into your ksonnet app. Pure JSON can be directly integrated into Jsonnet code, and Jsonnet can be converted back into JSON or YAML.
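    One reason this interop is painless: every JSON document is also valid Jsonnet, so existing JSON manifests can be imported and extended directly. A sketch (legacy-config.json is a hypothetical file):

```jsonnet
// Any JSON file can be imported as a Jsonnet value.
local legacy = import "legacy-config.json";  // hypothetical existing manifest

// Extend it without touching the original file.
legacy + {
  metadata+: {
    labels+: { app: "guestbook" },
  },
}
```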

  3. Now deploy the UI onto your cluster:

       ks apply default
    

    Note that default refers to the ks-dev context (and implicit namespace) that we used during ks init.

    (How is this different from kubectl apply?) [+]

    Let’s start with the similarities. Both kubectl apply and ks apply apply one or more manifests to your Kubernetes cluster.

    However, there are some differences:

    • By default, ks apply overrides old ksonnet manifests with new ones.

      Unless otherwise specified, it garbage collects previously created Kubernetes API resources. In contrast, kubectl apply takes a three-way-diff approach.

    • By default, ks apply applies everything in the components/ directory of your ksonnet app.

      Because components in an app usually tie together in a holistic way, it makes sense to deploy them all at once. In contrast, kubectl apply requires you to specify files with a -f flag. (If you still want this behavior, ks apply also supports the similar -c, which allows you to specify components like guestbook-ui.)

    • ks apply requires you to explicitly specify where you want to apply your manifests.

      In this case, we are specifying the default environment, which corresponds to the ks-dev context we used with ks init at the very beginning. In contrast, kubectl apply determines this cluster info implicitly, by pulling from your currently active kubeconfig.

  4. Take a look at the live Guestbook app.

    Again, don’t worry about these commands! They expose the Guestbook service so you can access it from your browser. They are as cluster-agnostic as possible, so you can copy and paste:

    Note that you won’t be able to submit messages yet! Because we haven’t yet deployed the Redis component, clicking the buttons in your Guestbook UI will fail.

       # Set up an API proxy so that you can access the guestbook-ui service locally
       kubectl proxy > /dev/null &
       KC_PROXY_PID=$!
       SERVICE_PREFIX=http://localhost:8001/api/v1/proxy
       GUESTBOOK_URL=$SERVICE_PREFIX/namespaces/ks-dev/services/guestbook-ui
    
       # Check out the guestbook app in your browser
       open $GUESTBOOK_URL
    
  5. Version control these changes:

     git add .
     git commit -m "autogenerate ui component"
    

Takeaways

How do we know what components are available for us to generate, and furthermore, how are they generated?

Components are based off of common manifest patterns, which are called prototypes because they make it really easy to prototype a new component on your cluster with minimal effort. You just saw the deployed-service prototype, which comes with ksonnet out-of-the-box.

If you review what we’ve just done, we only really needed steps (1) ks generate and (3) ks apply to get the Guestbook UI up and running on your cluster. Not bad! But we can do even better—you might be familiar with existing kubectl commands like run and expose that seem pretty similar. When we deploy a prototype that is more specialized than a Service and Deployment combo (Redis!), the advantages of ksonnet commands will make more sense.

3. Understand how prototypes build components

Define “prototype”

Before we figure out how to get Redis working, let’s take a moment to formalize our understanding of prototypes. In addition to general combinations of Kubernetes API objects like deployed-service, prototypes can also define common off-the-shelf components like databases.

We’ll actually be using the redis-stateless prototype next, which sets up a basic Redis instance (stateless because it is not backed by persistent volumes). More complex prototypes need to be downloaded because they do not come out-of-the-box; in this section, we’ll show you how to do so.

By itself, a prototype is an incomplete, skeleton manifest, written in Jsonnet. During ks generate, you can specify certain command-line parameters to “fill-in-the-blanks” of a prototype and output a component:

diagram of prototype to component process

(Why is this useful?) [+]

Oftentimes, when we’re setting up a popular component like Redis, only so much of its configuration is actually specific to your app. Say that 80% of the manifest is general configuration that needs to be there for any Redis deployment, and 20% is custom. It seems unnecessary to copy and paste that 80% every time you need Redis. Prototypes allow you to skip the boilerplate, and focus on the parts of your configuration that are specific to your app.
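Conceptually (this is a sketch, not ksonnet’s actual prototype file format), a prototype behaves like a Jsonnet function: the boilerplate is fixed, and ks generate supplies the remaining arguments:

```jsonnet
// Conceptual sketch only -- real prototypes use ksonnet's own annotation
// format, but the idea is a template with holes for parameters.
local deployedService(name, image, replicas=1) = {
  apiVersion: "apps/v1beta1",
  kind: "Deployment",
  metadata: { name: name },
  spec: {
    replicas: replicas,
    template: {
      metadata: { labels: { app: name } },
      spec: {
        containers: [{ name: name, image: image }],
      },
    },
  },
};

// `ks generate` effectively performs a call like this with your flag values:
deployedService("guestbook-ui", "gcr.io/heptio-images/ks-guestbook-demo:0.1")
```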

You’ll see this process in action a few more times, as we set up the rest of the Guestbook app.

Commands (Datastore component)

Now let’s use the redis-stateless prototype to generate the datastore component of our app, as depicted below:

screenshot of Guestbook app

We’ll need to do a little extra package management first, since the redis-stateless prototype is not available by default.

  1. Start by seeing what prototypes we have available out of the box:

     ks prototype list
    
  2. See what packages are currently available for us to download:

     ks pkg list
    

    (Where do these packages come from?) [+]

    By default, ksonnet is aware of any package defined in the incubator registry, which corresponds to this GitHub repo.

    Currently ksonnet does not support adding other registries, but this is a planned feature.

  3. Download a specific version of the ksonnet Redis library (which contains definitions for various Redis prototypes):

     ks pkg install incubator/redis@master
    
  4. Check the updated list of packages and prototypes (you should see redis and redis-stateless):

     ks pkg list
     ks prototype list
    
  5. Figure out the parameters we need for this prototype:

     ks prototype describe redis-stateless
    
  6. At this point, we’re ready to generate the manifest for our Redis component:

     ks generate redis-stateless redis
    

    (This is familiar, right?) [+]

    Previously we did the same thing with deployed-service in order to set up our Guestbook UI. In this case, the defaults for the redis-stateless prototype suffice, so we don’t need to specify any additional parameters.

    The manifest for this Redis component is saved under components/redis.jsonnet, and all of its constituent Kubernetes API objects are deployed with the metadata.name redis.

  7. View the YAML equivalent (we’re still in our default “sandbox”):

       ks show default
    
  8. Now deploy Redis to our cluster:

       ks apply default
    

    (Is there a way to see what will happen first, without actually changing our cluster?) [+]

    Yes, you can actually run ks apply with the --dry-run flag, which outputs a summary describing which resources will be created or modified by your current manifests.

  9. Let’s check out the Guestbook page again:

     open $GUESTBOOK_URL
    

    Enter something into the main textbox (it should say “Messages” in grayed out text), and click the Submit button. Unlike before, you should now see it appear below. This should look something like the following:

    screenshot of guestbook with messages

  10. Version control these changes:

    git add .
    git commit -m "autogenerate redis component"
    

Awesome, we have the main functionality of the Guestbook working!

(Hm, but how does the Guestbook UI know how to talk to the Redis database?) [+]

Ok, we have cheated a little bit here. Right now your Guestbook UI container image is hard-coded to send requests to the redis DNS name. Because your cluster is running kube-dns, if a Kubernetes Service exists with the name redis, these requests are automatically routed to it.

This works if you’ve been following our instructions exactly, but it’s pretty brittle. This implementation tightly couples your Guestbook UI to the redis DNS name, which makes it harder to rename the datastore service later on.

As you might have guessed, there should be a more explicit “glue” that connects your Guestbook UI to your datastore. In a future tutorial, we’ll actually demonstrate how to do this with a combination of ksonnet parameters and environment variables.

Takeaways

Using ks generate and ks apply, you can use prototypes and parameters to quickly get the components of your app up and running on a Kubernetes cluster. You can use additional helper commands like ks show and ks describe to supplement the process of developing your manifests.

Full disclosure: even with parameter customization, your autogenerated manifests will not always match up perfectly with what you need. However, as the ksonnet tour demonstrates, you can leverage the flexibility of the Jsonnet language to tweak them accordingly.

(Great, but how do I keep track of all of these ks commands?) [+]

If you forget the name or usage of a ks CLI command, you can use the --help flag to see documentation in your terminal. You can run this at the top level (e.g. ks --help to see a list of all commands), or for nested commands as well (e.g. ks prototype --help for a list of all commands relevant to prototypes).

4. Set up another environment for your app

At this point, we have the basics of our Guestbook app working. Users are able to submit messages via the UI, and these are persisted in our Redis datastore.

We aren’t covering fancier features in this tutorial (like search or logging), but we’re going to show how you can use the same set of component manifests in your ksonnet application to deploy to multiple environments. In practice, you might imagine developing your manifests in a dev environment, and vetting the results before promoting them to an official prod environment.

Define “environment”

Below is a visualization of two environments that represent different namespaces on the same cluster:

Diagram of two possible environments

More formally, you can think of an environment as a combination of four elements, some of which can be pulled from your current kubeconfig context:

  1. A name — Used to identify a specific environment, and must be unique within a given ksonnet app.
  2. A server URI — The address and port of a Kubernetes API server. In other words, it identifies a unique cluster.
  3. A namespace — A specific namespace within the cluster specified by the server URI. Default is default.
  4. A Kubernetes API version — The version of Kubernetes that your API server is running. Used to generate the appropriate helper libraries from Kubernetes’s OpenAPI spec.
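Most of these elements are captured in that environment’s spec.json. A minimal sketch (the server URL is a placeholder, and the exact schema may differ between ksonnet versions):

```json
{
  "server": "https://your-api-server.example.com",
  "namespace": "ks-dev"
}
```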

We’re going to set up something very similar to the diagram above (e.g. two environments on the same cluster), in order to mock the process of release management.

Commands

  1. Create a new namespace and context, both named ks-prod, for your second environment:

     kubectl create namespace ks-prod
     kubectl config set-context ks-prod \
       --namespace ks-prod \
       --cluster $CURRENT_CLUSTER \
       --user $CURRENT_USER
    
  2. Add the prod environment under the name prod, and rename the existing default environment to dev for clarity:

     ks env list
     ks env add prod --context=ks-prod
     ks env set default --name dev
     ks env list
    
  3. Apply all existing manifests (Guestbook UI and Redis) to your prod environment:

     ks apply prod
    
  4. Now you have a parallel version of Guestbook running in prod (same cluster, ks-prod namespace):

     PROD_GUESTBOOK_URL=$SERVICE_PREFIX/namespaces/ks-prod/services/guestbook-ui
    
     open $PROD_GUESTBOOK_URL
    
  5. Check your changes into version control:

     git add .
     git commit -m "add prod env"
    

    (Do we need version control if we didn’t add any new components?) [+]

    The environment changes you just made actually affect the environments/ directory! As you can see, everything (all these manifests and their associated metadata) is tracked explicitly in local files, which means that everything can be version controlled and traced back to a commit. Configuration-as-code really does mean all configuration.

Takeaways

Environments allow you to deploy a common set of manifests to different destinations. If you’re wondering why you might do this, here are some potential use cases:

  • Release Management (dev vs test vs prod)
  • Multi-AZ (us-west-2 vs us-east-1)
  • Multi-cloud (AWS vs GCP vs Azure)

Environments are represented hierarchically, so if you’re dealing with many environments, you can nest them as us-west-2/dev and us-east-1/prod. As you’ll see next, this lets parameters of any specific environment override its base/parent environments in an intuitive way.

5. Customize an environment with parameters

Alright, so it’s great to be able to apply the same manifests to multiple environments—but oftentimes the whole point of distinct environments is slightly different configurations.

It’s a bit restrictive and unrealistic if our prod Guestbook has to run in exactly the same way as our dev Guestbook, so let’s start customizing our environments with parameters. Up until this point, we’ve been setting parameters during ks generate, when we pass in command line flags to customize a new component. Here we’ll show how you can change these parameters after the fact, for specific environments.

Define “parameters”

As we’ve alluded to, parameters can be set for the entire app or per-environment. In this tutorial, all the parameters you’ll see are specific to a component. A future tutorial will address the idea of global parameters, which can be shared across multiple components.

Under the hood, the ks param commands update a couple of local Jsonnet files, so that you always have a version-controllable representation of what you ks apply onto your Kubernetes cluster.

(What does this look like?) [+]

  • components/params.libsonnet - These are your app parameters.

    The contents of this file are a Jsonnet hash, with two keys:
    • global - Needs to be manually edited
    • components - Set during ks generate or via ks param set

  • environments/<env-name>/params.libsonnet - These are environment parameters, specific to env-name.

    There is only one key in this hash:
    • components - Set via ks param set, with --env <env-name>
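Putting the two files together: after a command like ks param set guestbook-ui replicas 3 --env prod, the prod environment’s params.libsonnet stores just the override, roughly like this (a sketch; the generated boilerplate varies):

```jsonnet
local params = import "../../components/params.libsonnet";

params + {
  components+: {
    // Only the overridden field is stored here; everything else
    // falls through to the app-level components/params.libsonnet.
    "guestbook-ui"+: {
      replicas: 3,
    },
  },
}
```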

Commands

  1. First let’s see the difference between our environments’ parameters (there should be none):

     ks param diff dev prod
    
  2. Now let’s set some environment-specific params:

     ks param set guestbook-ui image gcr.io/heptio-images/ks-guestbook-demo:0.2 --env dev
     ks param set guestbook-ui replicas 3 --env prod
    

    (What’s the story here?) [+]

    In the first line, we’re updating the container image for your dev UI. This lets you vet your code changes first, before making any similar changes to prod.

    In the second line, we’re updating the prod UI deployment to have three replicas instead of just one. You might want to do this if you expect prod to handle higher traffic than dev (e.g. more users than developers).

  3. Now let’s see if our param diff surfaces any differences:

     ks param diff dev prod   
    

    Notice that the params we’ve changed have been highlighted!

  4. Alright, now let’s deploy to our two environments (remember, same cluster):

     ks apply dev && ks apply prod
    
  5. Let’s check the difference between what’s actually running on dev and prod:

     ks diff remote:dev remote:prod
    

    (What does the output mean?) [+]

    The output here is a standard file diff between the manifests running in your dev environment and the ones in prod. As you might notice, the diffs include more than just the parameter changes you made! This is because the Kubernetes API server autopopulates certain fields like creationTimestamp.

    If you’re sure that you’ve deployed to both environments, you can get a cleaner diff by comparing the locally generated manifests rather than the ones on your cluster server: ks diff local:dev local:prod.

    (What’s the syntax?) [+]

    Arguments to ks diff combine two values, separated by a colon:

    • Where the manifests are located (remote or local)
    • The environment name
  6. Compare the two guestbook UIs (the one for dev should look pretty different!):

     # Check out dev guestbook
     open $GUESTBOOK_URL
    
     # Make sure that the changes didn't affect prod
     open $PROD_GUESTBOOK_URL
    
  7. Once again, check your files into version control:

     git add .
     git commit -m "update guestbook-ui parameters"
    

Takeaways

With the added power of parameters, environments allow you to do more than run identical copies of your app in different clusters and namespaces. Using parameters, you can fine tune your deployment to the needs of each environment, whether that is for different load requirements or just more accurate labels.

6. Tie it together

Congrats! You’ve just developed and deployed the main components of the Guestbook using ksonnet, and you now have a sustainable set of manifests that you can continue to use if you decide to add more functionality later on.

We realize that we’ve gone over a lot, so the following diagram provides a quick overview of the key ksonnet concepts you’ve used:

high-level diagram of ksonnet

In plain English:

  1. Prototypes and parameters can combine to form components.
  2. Multiple components make up an app.
  3. An app can be deployed to multiple environments.

Cleanup

If you’d like to remove the Guestbook app and other residual traces from your cluster, run the following commands in the root of your Guestbook app directory:

# Remove your app from your cluster (everything defined in components/)
ks delete dev && ks delete prod

# If you used 'kubectl proxy' to connect to your Guestbook service, make sure
# to end that process
kill $KC_PROXY_PID

# Remove the "sandbox"
kubectl delete namespace ks-dev ks-prod
kubectl config delete-context ks-dev && kubectl config delete-context ks-prod

Next steps

We’ve also only just skimmed the surface of the ksonnet framework. Much of what you’ve seen has been focused on the CLI. To learn more, check out the following resources:

(What about the rest of the Guestbook (e.g. search)?) [+]

Stay tuned for a future tutorial, which delves into more advanced functionality and finishes the rest of the Guestbook, by walking you through how to:

  • Add more components (Elasticsearch search and EFK logging)
  • Modify components directly (e.g. to use env variables instead of hard-coded references!)
  • Understand what a prototype is made of (parts)
  • Examine the structure of a ksonnet library
  • Add legacy JSON manifests into your app (and refactor)

Troubleshooting

GitHub rate limiting errors

If you get an error to the effect of 403 API rate limit of 60 still exceeded, you can work around it by creating a GitHub personal access token and configuring ks to use it. GitHub has higher rate limits for authenticated users than for unauthenticated users.

  1. Go to https://github.com/settings/tokens and generate a new token. You don’t have to grant it any scopes at all, as you are simply authenticating.
  2. Make sure you save the token somewhere, because you can’t view it again. If you lose it, you’ll have to delete it and create a new one.
  3. Set an environment variable in your shell: export GITHUB_TOKEN=<token>. You may want to do this as part of your shell startup scripts (e.g. .profile).