
Tuesday, 8 October 2024

Working on K8s Identity Management.

The configuration change is ready to be pushed into AWS. Please let me know when you are ready for me to start pushing it into the Dev cluster, so we can test and assess the outcome. If we are happy with it, we can then propose a rollout into PROD.


Using kubeconfig files


There are three main ways to point kubectl at your kubeconfig files:

1 - The --kubeconfig flag


You can pass the --kubeconfig flag on every kubectl command that you run.  This flag forces kubectl to read from the kubeconfig file that you specify.  You can only use one kubeconfig file this way, and you can only specify one instance of the flag on the command line.  This approach is a little cumbersome, as you have to type the flag for every kubectl command.
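For example (the file path here is just a placeholder to illustrate the flag):

#point this one command at a specific kubeconfig file
kubectl get pods --kubeconfig=/home/user/dev-cluster.config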

2 - The KUBECONFIG environment variable


You can also set a special environment variable named KUBECONFIG.  The value of this variable points at the kubeconfig file that you would like to use.  This variable can be pointed at multiple kubeconfig files, if you wish.  Just make sure to separate the files with colons (on Linux & Mac) or semi-colons (on Windows).  If you specify multiple kubeconfigs this way, then kubectl will merge them all into one config and use that merged version.
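For example, on Linux or Mac (the file names below are placeholders):

#point kubectl at two kubeconfig files; kubectl merges them into one view
export KUBECONFIG=$HOME/.kube/dev-cluster.config:$HOME/.kube/prod-cluster.config
kubectl config view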

3 - The default config file


By default, the kubectl command-line tool will look for a kubeconfig file simply named config (no file extension) in the .kube directory of the user's profile:

  • Linux: $HOME/.kube/config
  • Windows: %USERPROFILE%\.kube\config
This is the easiest method to use, in my opinion.  Simply place a file in the correct directory, and kubectl will automatically pick it up and use it.
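For example, assuming you have been handed a kubeconfig file named dev-cluster.config (a placeholder name):

#copy the file into the default location so kubectl picks it up automatically
#(back up any existing ~/.kube/config first)
mkdir -p $HOME/.kube
cp dev-cluster.config $HOME/.kube/config
kubectl config current-context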

Some useful kubectl commands


#show the full contents of your kubeconfig file
kubectl config view

#show the value of the current-context line of your kubeconfig file
kubectl config current-context

#show all of the Users currently defined in your kubeconfig file
kubectl config get-users

#show all of the Clusters currently defined in your kubeconfig file
kubectl config get-clusters

#show all of the Contexts currently defined in your kubeconfig file
kubectl config get-contexts


kubectl Cheat Sheet

This page contains a list of commonly used kubectl commands and flags.

Kubectl autocomplete

BASH

source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
You can also use a shorthand alias for kubectl that also works with completion:

alias k=kubectl
complete -F __start_kubectl k

ZSH

source <(kubectl completion zsh) # setup autocomplete in zsh into the current shell
echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell

Kubectl context and configuration

Set which Kubernetes cluster kubectl communicates with and modify its configuration information. See the Authenticating Across Clusters with kubeconfig documentation for detailed config file information.

kubectl config view # Show Merged kubeconfig settings.


# use multiple kubeconfig files at the same time and view merged config
KUBECONFIG=~/.kube/config:~/.kube/kubconfig2


kubectl config view


# get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'


kubectl config view -o jsonpath='{.users[].name}' # display the first user
kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name


# add a new user to your kubeconfig that supports basic auth
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword


# permanently save the namespace for all subsequent kubectl commands in that context.
kubectl config set-context --current --namespace=ggckad-s2


# set a context utilizing a specific username and namespace.
kubectl config set-context gce --user=cluster-admin --namespace=foo \
&& kubectl config use-context gce


kubectl config unset users.foo # delete user foo

Kubectl apply

Apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster by running kubectl apply. This is the recommended way of managing Kubernetes applications in production. See Kubectl Book.

Creating objects

Kubernetes manifests can be defined in YAML or JSON. The file extensions .yaml, .yml, and .json can be used.

kubectl apply -f ./my-manifest.yaml # create resource(s)
kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files
kubectl apply -f ./dir # create resource(s) in all manifest files in dir
kubectl apply -f https://git.io/vPieo # create resource(s) from url
kubectl create deployment nginx --image=nginx # start a single instance of nginx


# create a Job which prints "Hello World"
kubectl create job hello --image=busybox -- echo "Hello World"


# create a CronJob that prints "Hello World" every minute
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"


kubectl explain pods # get the documentation for pod manifests


# Create multiple YAML objects from stdin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep-less
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000"
EOF


# Create a secret with several keys
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "s33msi4" | base64 -w0)
  username: $(echo -n "jane" | base64 -w0)
EOF

Viewing, finding resources

# Get commands with basic output
kubectl get services # List all services in the namespace
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide # List all pods in the current namespace, with more details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods # List all pods in the namespace
kubectl get pod my-pod -o yaml # Get a pod's YAML


# Describe commands with verbose output
kubectl describe nodes my-node
kubectl describe pods my-pod


# List Services Sorted by Name
kubectl get services --sort-by=.metadata.name


# List pods Sorted by Restart Count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'


# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage


# Get the version label of all pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
jsonpath='{.items[*].metadata.labels.version}'


# Retrieve the value of a key with dots, e.g. 'ca.crt'
kubectl get configmap myconfig \
-o jsonpath='{.data.ca\.crt}'


# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/master')
kubectl get node --selector='!node-role.kubernetes.io/master'


# Get all running pods in the namespace
kubectl get pods --field-selector=status.phase=Running


# Get ExternalIPs of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'


# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})


# Show labels for all pods (or any other Kubernetes object that supports labelling)
kubectl get pods --show-labels


# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"


# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'


# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq


# List all containerIDs of initContainer of all pods
# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.
kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3


# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp


# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml


# Produce a period-delimited tree of all keys returned for nodes
# Helpful when locating a key within a complex nested JSON structure
kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'


# Produce a period-delimited tree of all keys returned for pods, etc
kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'


# Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.
# Helpful when running any supported command across all pods, not just `env`
for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done

Updating resources

kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image
kubectl rollout history deployment/frontend # Check the history of deployments including the revision
kubectl rollout undo deployment/frontend # Rollback to the previous deployment
kubectl rollout undo deployment/frontend --to-revision=2 # Rollback to a specific revision
kubectl rollout status -w deployment/frontend # Watch rolling update status of "frontend" deployment until completion
kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment




cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into stdin


# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json


# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
kubectl expose rc nginx --port=80 --target-port=8000


# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -


kubectl label pods my-pod new-label=awesome # Add a Label
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo"

Patching resources

# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'


# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'


# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'


# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'


# Add a new element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

Editing resources

Edit any API resource in your preferred editor.

kubectl edit svc/docker-registry # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Use an alternative editor

Scaling resources

kubectl scale --replicas=3 rs/foo # Scale a replicaset named 'foo' to 3
kubectl scale --replicas=3 -f foo.yaml # Scale a resource specified in "foo.yaml" to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale multiple replication controllers

Deleting resources

kubectl delete -f ./pod.json # Delete a pod using the type and name specified in pod.json
kubectl delete pod unwanted --now # Delete a pod with no grace period
kubectl delete pod,service baz foo # Delete pods and services with same names "baz" and "foo"
kubectl delete pods,services -l name=myLabel # Delete pods and services with label name=myLabel
kubectl -n my-ns delete pod,svc --all # Delete all pods and services in namespace my-ns
# Delete all pods matching the awk pattern1 or pattern2
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod

Interacting with running Pods

kubectl logs my-pod # dump pod logs (stdout)
kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)
kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml # Run pod nginx and write its spec into a file called pod.yaml


kubectl attach my-pod -i # Attach to Running Container
kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
kubectl exec my-pod -- ls / # Run command in existing pod (1 container case)
kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)
kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'

Copy files and directories to and from containers

kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace
kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally

Note: kubectl cp requires that the 'tar' binary is present in your container image. If 'tar' is not present, kubectl cp will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using kubectl exec.

tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally

Interacting with Deployments and Services

kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)


kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend
kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port>


kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)

Interacting with Nodes and cluster

kubectl cordon my-node # Mark my-node as unschedulable
kubectl drain my-node # Drain my-node in preparation for maintenance
kubectl uncordon my-node # Mark my-node as schedulable
kubectl top node my-node # Show metrics for a given node
kubectl cluster-info # Display addresses of the master and services
kubectl cluster-info dump # Dump current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state


# If a taint with that key and effect already exists, its value is replaced as specified.
kubectl taint nodes foo dedicated=special-user:NoSchedule

Resource types

List all supported resource types along with their shortnames, API group, whether they are namespaced, and Kind:

kubectl api-resources

Other operations for exploring API resources:

kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (only the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group

Formatting output

To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.

Output format and description:
  • -o=custom-columns=<spec>            Print a table using a comma-separated list of custom columns
  • -o=custom-columns-file=<filename>   Print a table using the custom columns template in the <filename> file
  • -o=json                             Output a JSON formatted API object
  • -o=jsonpath=<template>              Print the fields defined in a jsonpath expression
  • -o=jsonpath-file=<filename>         Print the fields defined by the jsonpath expression in the <filename> file
  • -o=name                             Print only the resource name and nothing else
  • -o=wide                             Output in the plain-text format with any additional information, and for pods, the node name is included
  • -o=yaml                             Output a YAML formatted API object
Examples using -o=custom-columns:

# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'


# All images running in namespace: default, grouped by Pod
kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"


# All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'


# All fields under metadata regardless of name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
More examples in the kubectl reference documentation.

Kubectl output verbosity and debugging

Kubectl verbosity is controlled with the -v or --v flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described here.

Verbosity level and description:
  • --v=0   Generally useful for this to always be visible to a cluster operator.
  • --v=1   A reasonable default log level if you don't want verbosity.
  • --v=2   Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
  • --v=3   Extended information about changes.
  • --v=4   Debug level verbosity.
  • --v=5   Trace level verbosity.
  • --v=6   Display requested resources.
  • --v=7   Display HTTP request headers.
  • --v=8   Display HTTP request contents.
  • --v=9   Display HTTP request contents without truncation of contents.
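For example, to watch the raw HTTP requests kubectl sends to the API server while listing pods:

kubectl get pods --v=8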

Friday, 29 September 2017

Working with PuppetLabs Using Vagrant


While working as a DevOps Engineer, one of the tools we use most often is Puppet. Most people will assume that you are a 100% expert, which is not always the case. So I decided to create this post to keep a record of my own experiments with Puppet (PuppetLabs + Vagrant).

You might ask, what is Vagrant?
==> "Vagrant is an open-source software product for building and maintaining portable virtual software development environments, e.g. for VirtualBox, Hyper-V, Docker, VMware, and AWS. ... Vagrant simplifies the necessary software configuration management in order to increase development productivity. " read more here: https://www.vagrantup.com/intro/index.html

The initial intention was to gain a better understanding of the Puppet file structure. So, I decided to use PuppetLabs for this ...

So, basically, here is what I did:

  • Installed Vagrant
  • Used the vagrant init command to pull the PuppetLabs Ubuntu box, which created a file named "Vagrantfile"
  • Created a dir named puppetlabs (see the commands sketched below)
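Roughly, the commands looked like this (a sketch; the box name is the one that appears in the logs below):

mkdir puppetlabs && cd puppetlabs
vagrant init puppetlabs/ubuntu-16.04-32-puppet   # writes the Vagrantfile
vagrant up                                       # downloads the box and boots the VM (output in the logs below)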


And then started the setup. Here are the logs:

Tdls-Air:puppetlabs psalms91$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'puppetlabs/ubuntu-16.04-32-puppet' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: 1.0.0
==> default: Loading metadata for box 'puppetlabs/ubuntu-16.04-32-puppet'
    default: URL: https://vagrantcloud.com/puppetlabs/ubuntu-16.04-32-puppet
==> default: Adding box 'puppetlabs/ubuntu-16.04-32-puppet' (v1.0.0) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/puppetlabs/boxes/ubuntu-16.04-32-puppet/versions/1.0.0/providers/virtualbox.box
==> default: Successfully added box 'puppetlabs/ubuntu-16.04-32-puppet' (v1.0.0) for 'virtualbox'!
==> default: Importing base box 'puppetlabs/ubuntu-16.04-32-puppet'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'puppetlabs/ubuntu-16.04-32-puppet' is up to date...
==> default: Setting the name of the VM: puppetlabs_default_1506687306250_65705
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 5.0.20
    default: VirtualBox Version: 5.1
==> default: Mounting shared folders...
    default: /vagrant => /Users/psalms91/Vagrant_VM/puppetlabs
Tdls-Air:puppetlabs psalms91$

Tdls-Air:puppetlabs psalms91$ vagrant ssh

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic i686)

 * Documentation:  https://help.ubuntu.com/

vagrant@localhost:~$

After this I have my PuppetLabs VM running ... After this point it was easy, I just needed to go into the Puppet installation dir and look into the dir structure.
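If you want to poke around yourself, the Puppet configuration and code directories on this kind of box should live under /etc/puppetlabs (an assumption based on the puppet-agent package layout; adjust the paths if your install differs):

vagrant ssh
ls /etc/puppetlabs                        # configuration and code directories (layout may vary)
ls -R /etc/puppetlabs/code/environments   # environments, manifests and modules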


done.. :-)

Tuesday, 10 September 2013

Linux Containers on Virtualbox - Disposable Boxes by Michal Migurski

Hey look, a month went by and I stopped blogging because I have a new job. Great.
One of my responsibilities is keeping an eye on our sprawling Github account, currently at 326 repositories and 151 members. The current fellows are working on a huge number of projects and I frequently need to be able to quickly install, test and run projects with a weirdly-large variety of backend and server technologies. So, it’s become incredibly important to me to be able to rapidly spin up disposable Linux web servers to test with. Seth clued me in to Linux Containers (LXC) for this:
LXC provides operating system-level virtualization not via a full blown virtual machine, but rather provides a virtual environment that has its own process and network space. LXC relies on the Linux kernel cgroups functionality that became available in version 2.6.24, developed as part of LXC. … It is used by Heroku to provide separation between their “dynos.”
I use a Mac, so I’m running these under Virtualbox. I move around between a number of different networks, so each server container had to have a no-hassle network connection. I’m also impatient, so I really needed to be able to clone these in seconds and have them ready to use.
This is a guide for creating an Ubuntu Linux virtual machine under Virtualbox to host individual containers with simple two-way network connectivity. You’ll be able to clone a container with a single command, and connect to it using a simple <container>.local host name.

The Linux Host

First, download an Ubuntu ISO. I try to stick to the long-term support releases, so I’m using Ubuntu 12.04 here. Get a copy of Virtualbox, also free.
Create a new Virtualbox virtual machine to boot from the Ubuntu installation ISO. For a root volume, I selected the VDI format with a size of 32GB. The disk image will expand as it’s allocated, so it won’t take up all that space right away. I manually created three partitions on the volume:
  1. 4.0 GB ext4 primary.
  2. 512 MB swap, matching RAM size. Could use more.
  3. All remaining space btrfs, mounted at /var/lib/lxc.
Btrfs (B-tree file system, pronounced “Butter F S”, “Butterfuss”, “Better F S”, or “B-tree F S") is a GPL-licensed experimental copy-on-write file system. It will allow our cloned containers to occupy only as much disk space as is changed, which will decrease the overall file size of the virtual machine.
During the OS installation process, you’ll need to select a host name. I used “ubuntu-demo” for this demonstration.

Host Linux Networking

Boot into Linux. I started by installing some basics, for me: git, vim, tcsh, screen, htop, and etckeeper.
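On Ubuntu that is just a quick apt-get (same package names as listed above):

% sudo apt-get update
% sudo apt-get install git vim tcsh screen htop etckeeper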
Set up /etc/network/interfaces with two bridges for eth0 and eth1, both DHCP. Note that eth0 and eth1 must be commented-out, as in this sample part of my /etc/network/interfaces:
## The primary network interface
#auto eth0
#iface eth0 inet dhcp

auto br0
iface br0 inet dhcp
        dns-nameservers 8.8.8.8
        bridge_ports eth0
        bridge_fd 0
        bridge_maxwait 0

auto br1
iface br1 inet dhcp
        bridge_ports eth1
        bridge_fd 0
        bridge_maxwait 0
Back in Virtualbox preferences, create a new network adapter and call it “vboxnet0”. My settings are 10.1.0.1, 255.255.255.0, with DHCP turned on.
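If you prefer the command line to the Virtualbox GUI, the same host-only network can be created with VBoxManage (a sketch using the settings above; the DHCP server address and range are my own choices):

% VBoxManage hostonlyif create                                                       # creates vboxnet0 if it does not exist yet
% VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.1.0.1 --netmask 255.255.255.0
% VBoxManage dhcpserver add --ifname vboxnet0 --ip 10.1.0.100 --netmask 255.255.255.0 --lowerip 10.1.0.101 --upperip 10.1.0.254 --enable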


Shut down the Linux host, and add the secondary interface in Virtualbox. Choose host-only networking, the vboxnet0 adapter, and “Allow All” promiscuous mode so that the containers can see inbound network traffic.

The primary interface will be NAT by default, which will carry normal out-bound internet traffic.
  1. Adapter 1: NAT (default)
  2. Adapter 2: Host-Only vboxnet0
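If you prefer to script that step, the equivalent VBoxManage call would be roughly this (a sketch; it assumes you also named the Virtualbox VM “ubuntu-demo”, and the VM must be powered off):

% VBoxManage modifyvm "ubuntu-demo" --nic2 hostonly --hostonlyadapter2 vboxnet0 --nicpromisc2 allow-all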
Start up the Linux host again, and you should now be able to ping the outside world.
% ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=63 time=340 ms
…
Use ifconfig to find your Linux IP address (mine is 10.1.0.2), and try ssh’ing to that address from your Mac command line with the username you chose during initial Ubuntu installation.
% ifconfig br1

br1       Link encap:Ethernet  HWaddr 08:00:27:94:df:ed  
          inet addr:10.1.0.2  Bcast:10.1.0.255  Mask:255.255.255.0
          inet6 addr: …
Next, we’ll set up Avahi to broadcast host names so we don’t need to remember DHCP-assigned IP addresses. On the Linux host, install avahi-daemon:
% apt-get install avahi-daemon
In the configuration file /etc/avahi/avahi-daemon.conf, change these lines to clarify that our host names need only work on the second, host-only network adapter:
allow-interfaces=br1,eth1
deny-interfaces=br0,eth0,lxcbr0
Then restart Avahi.
% sudo service avahi-daemon restart
Now, you should be able to ping and ssh to ubuntu-demo.local from within the virtual machine and your Mac command line.
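From the Mac side that looks something like this (the username is whatever you chose during the Ubuntu install):

% ping ubuntu-demo.local
% ssh youruser@ubuntu-demo.local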

No Guest Containers

So far, we have a Linux virtual machine with a reliable two-way network connection that’s resilient to external network failures, available via a meaningful host name, and with a slightly funny disk setup. You could stop here, skipping the LXC steps and use Virtualbox’s built-in cloning functionality or something like Vagrant to set up fresh development environments. I’m going to keep going and set up LXC.

Linux Guest Containers

Install LXC.
% sudo apt-get install lxc
Initial LXC setup uses templates, and on Ubuntu there are several useful ones that come with the package. You can find them under /usr/lib/lxc/templates; I have templates for ubuntu, fedora, debian, opensuse, and other popular Linux distributions. To create a new container called “base” use lxc-create with a chosen template.
% sudo lxc-create -n base -t ubuntu
This takes a few minutes, because it needs to retrieve a bunch of packages for a minimal Ubuntu system. You’ll see this message at some point:
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
Without starting the container, modify its network adapters to match the two we set up earlier. Edit the top of /var/lib/lxc/base/config to look something like this:
lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up
lxc.network.hwaddr = 00:16:3e:c2:9d:71

lxc.network.type=veth
lxc.network.link=br1
lxc.network.flags=up
lxc.network.hwaddr = 00:16:3e:c2:9d:72
An initial MAC address will be randomly generated for you under lxc.network.hwaddr, just make sure that the second one is different.
Modify the container’s network interfaces by editing /var/lib/lxc/base/rootfs/etc/network/interfaces (/var/lib/lxc/base/rootfs is the root filesystem of the new container) to look like this:
auto eth0
iface eth0 inet dhcp
        dns-nameservers 8.8.8.8

auto eth1
iface eth1 inet dhcp
Now your container knows about two network adapters, and they have been bridged to the Linux host OS virtual machine NAT and host-only adapters. Start your new container:
% sudo lxc-start -n base
You’ll see a normal Linux login screen at first, use the default username and password “ubuntu” and “ubuntu” from above. The system starts out with minimal packages. Install a few so you can get around, and include language-pack-en so you don’t get a bunch of annoying character set warnings:
% sudo apt-get install language-pack-en
% sudo apt-get install git vim tcsh screen htop etckeeper
% sudo apt-get install avahi-daemon
Make a similar change to the /etc/avahi/avahi-daemon.conf as above:
allow-interfaces=eth1
deny-interfaces=eth0
Shut down to return to the Linux host OS.
% sudo shutdown -h now
Now, restart the container with all the above modifications, in daemon mode.
% sudo lxc-start -d -n base
After it’s started up, you should be able to ping and ssh to base.local from your Linux host OS and your Mac.
% ssh ubuntu@base.local

Cloning a Container

Finally, we will clone the base container. If you’re curious about the effects of Btrfs, check the overall disk usage of the /var/lib/lxc volume where the containers are stored:
% df -h /var/lib/lxc

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G  572M   26G   3% /var/lib/lxc
Clone the base container to a new one, called “clone”.
% sudo lxc-clone  -o base -n clone
Look at the disk usage again, and you will see that it’s not grown by much.
% df -h /var/lib/lxc

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G  573M   26G   3% /var/lib/lxc
If you actually look at the disk usage of the individual container directories, you’ll see that Btrfs is allowing 1.1GB of files to live in just 573MB of space, representing the repeating base files between the two containers.
% sudo du -sch /var/lib/lxc/*

560M /var/lib/lxc/base
560M /var/lib/lxc/clone
1.1G total
You can now start the new clone container, connect to it and begin making changes.
% sudo lxc-start -d -n clone
% ssh ubuntu@clone.local

Conclusion

I have been using this setup for the past few weeks, currently with a half-dozen containers that I use for a variety of jobs: testing TileStache, installing Rails applications with RVM, serving Postgres data, and checking out new packages. One drawback that I have encountered is that as the disk image grows, my nightly time machine backups grow considerably. The Mac host OS can only see the Linux disk image as a single file.
On the other hand, having ready access to a variety of local Linux environments has been a boon to my ability to quickly try out ideas. Special thanks again to Seth for helping me work through some of the networking ugliness.

Further Reading

Tao of Mac has an article on a similar, but slightly different Virtualbox and LXC setup. They don’t include the promiscuous mode setting for the second network adapter, which I think is why they advise using Avahi and port forwarding to connect to the machine. I believe my way here might be easier.
Shift describes a Vagrant and LXC setup that skips Avahi and uses plain hostnames for internal connectivity.

The owner of this post is Michal Migurski.
Find his blog here: http://mike.teczno.com/notes/disposable-virtualbox-lxc-environments.html

Checking for open ports is among the first steps to secure your device. Listening services may be the entrance for attackers who may exploit...