diff --git a/DevOps/Docker/AWS_SAM_and_Docker.md b/DevOps/Docker/AWS_SAM_and_Docker.md
new file mode 100644
index 0000000..fc7cf36
--- /dev/null
+++ b/DevOps/Docker/AWS_SAM_and_Docker.md
@@ -0,0 +1,42 @@
+---
+categories:
+  - DevOps
+tags: [docker, gpt-output]
+---
+
+# AWS SAM and Docker
+
+_How would you use docker containers in AWS. How would this integrate with SAM. What are the benefits over just deploying from local via SAM_
+
+Amazon Web Services (AWS) provides several services for running Docker containers, such as Amazon Elastic Container Service (ECS) and AWS Fargate. To integrate Docker containers with the AWS Serverless Application Model (SAM), you can use the AWS Lambda container image support feature, which allows you to package and deploy Lambda functions as container images.
+
+Here's a brief overview of how to use Docker containers in AWS with SAM:
+
+1. Create a Dockerfile for your Lambda function, using one of the AWS-provided base images for your preferred runtime (e.g., Python, Node.js, etc.).
+2. Build the Docker image using the `docker build` command.
+3. Push the Docker image to Amazon Elastic Container Registry (ECR) using the `docker push` command.
+4. In your AWS SAM template (`template.yaml`), define a Lambda function with the `ImageUri` property pointing to the Docker image stored in ECR (see the sketch at the end of this note).
+5. Deploy the serverless application using the `sam deploy` command.
+
+## Benefits of using Docker containers with AWS SAM over deploying from local
+
+- Consistency:
+
+  - Packaging your Lambda function as a container image ensures that the environment and dependencies are consistent across development, testing, and production stages. It eliminates the "it works on my machine" problem.
+
+- Custom runtimes:
+
+  - AWS Lambda provides a set of pre-defined runtimes, but with container image support, you can use custom runtimes, giving you more flexibility and control over the execution environment.
+
+- Easier dependency management:
+
+  - For complex applications with numerous dependencies or large package sizes, using container images simplifies dependency management and can help you stay within the Lambda package size limits.
+
+- Reuse of existing Docker workflows:
+
+  - If you already have Docker workflows in place for building, testing, and deploying containerized applications, you can easily extend these workflows to your Lambda functions, reducing the learning curve and operational overhead.
+
+- Ease of local testing:
+  - With the AWS SAM CLI, you can test your Lambda functions locally as container images, closely mimicking the actual execution environment and reducing the chances of unexpected issues when deploying to AWS.
+
+> While using Docker containers with AWS SAM has its benefits, it may not be suitable for every use case. If your Lambda functions have minimal dependencies and simple execution environments, deploying directly from local using SAM might be more straightforward and faster. However, for more complex applications, using Docker containers can provide better consistency, flexibility, and control over the execution environment.
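+
+As referenced in step 4, a minimal sketch of the relevant template section might look like the following. The function name and the ECR image URI are placeholders, not values from a real project:
+
+```yaml
+# template.yaml (illustrative snippet)
+Resources:
+  MyContainerFunction:
+    Type: AWS::Serverless::Function
+    Properties:
+      # Package the function as a container image rather than a zip archive
+      PackageType: Image
+      # URI of the image previously pushed to ECR
+      ImageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:latest
+```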
diff --git a/DevOps/Docker/Containeriztion.md b/DevOps/Docker/Containerization.md
similarity index 100%
rename from DevOps/Docker/Containeriztion.md
rename to DevOps/Docker/Containerization.md
diff --git a/DevOps/Docker/Creating_a_Docker_image.md b/DevOps/Docker/Creating_a_Docker_image.md
new file mode 100644
index 0000000..882e545
--- /dev/null
+++ b/DevOps/Docker/Creating_a_Docker_image.md
@@ -0,0 +1,132 @@
+---
+categories:
+  - DevOps
+tags: [docker]
+---
+
+# Creating a Docker image
+
+We create a new image with `docker build...`
+
+The key components of a build:
+
+- A Dockerfile
+- An asset directory (the **build context**) containing the Dockerfile and all resources needed for the build.
+
+## Syntax
+
+Here is an example of a Dockerfile that builds a container for running a Python app:
+
+```Dockerfile
+# Use an official Python runtime as a parent image
+FROM python:3.8-slim
+
+# Set the working directory to /app
+WORKDIR /app
+
+# Copy the current directory contents into the container at /app
+COPY . /app
+
+# Install any needed packages specified in requirements.txt
+RUN pip install --trusted-host pypi.python.org -r requirements.txt
+
+# Make port 80 available to the world outside this container
+EXPOSE 80
+
+# Define environment variable
+ENV NAME World
+
+# Run app.py when the container launches
+CMD ["python", "app.py"]
+```
+
+### Main syntax explained
+
+| Keyword   | Role |
+| --------- | ---- |
+| `#`       | Lines starting with '#' are comments, providing explanations or context for the Dockerfile. |
+| `FROM`    | Specifies the base image used to build the new image. In this case, it's using the official Python 3.8 slim image. |
+| `WORKDIR` | Sets the working directory for any subsequent instructions in the Dockerfile. Here, it's set to '/app'. All instructions after a `WORKDIR` line are executed in this directory. If the directory doesn't exist, Docker will create it. A Dockerfile can have multiple `WORKDIR` instructions. |
+| `COPY`    | Copies files or directories from the source (in this case, the current directory) to the destination (in this case, '/app') within the Docker image. The source is the build context: we are copying from the build context to the container when we use this keyword. |
+| `ADD`     | Same as `COPY` but can also be passed a tarball or a URL. |
+| `RUN`     | Executes a command and commits the result as a new layer in the image. In this example, it installs packages specified in the 'requirements.txt' file using pip. |
+| `EXPOSE`  | Informs Docker that the container listens on the specified network ports at runtime. In this case, port 80. |
+| `ENV`     | Sets an environment variable in the image. In this example, the variable 'NAME' is set to 'World'. |
+| `CMD`     | Specifies the command to run when the container starts. Here, it runs 'app.py' using Python. Note that there can only be one CMD instruction in a Dockerfile. |
+| `USER`    | The default user is `root`, which is not good for security. You have to choose a distribution image that allows you to change the user (Ubuntu allows for this). For example: `USER nobody:nogroup`. |
+| `LABEL`   | Metadata that you want to add to the image, viewable when the image is inspected, e.g. `LABEL maintainer=tactonbishop@gmail.com` |
+
+### Another example
+
+The following Dockerfile creates a frontend React application:
+
+```Dockerfile
+# Use the official Node.js image as the base image
+FROM node:14-alpine
+
+# Set the working directory to /app
+WORKDIR /app
+
+# Copy the package.json and package-lock.json files
+COPY package*.json ./
+
+# Install dependencies
+RUN npm ci
+
+# Copy the rest of the application code
+COPY . .
+
+# Expose the port the app will run on
+EXPOSE 3000
+
+# Start the React development server
+CMD ["npm", "start"]
+```
+
+### Two modes of `RUN`: shell and exec
+
+The `RUN` command in a Dockerfile has two forms: the "exec" form and the "shell" form. Both forms are used to execute commands during the build process, but they have different syntax and behavior.
+
+Exec form is written as `RUN ["executable", "param1", "param2", ...]`. This form executes the command directly without invoking a shell. As a result, shell processing features like environment variable substitution, pipes, and redirects are not available. It is the preferred form when you need to run a command without relying on shell behavior or when you want to avoid shell-related issues, such as variable substitution or command injection.
+
+Shell form is written as `RUN command param1 param2 ...`. This form executes the command within a shell, which is `/bin/sh -c` on Linux and `cmd /S /C` on Windows. It allows for shell processing, enabling environment variable substitution, pipes, redirects, and other shell features. This form is preferred when you need to use shell features or when you want to chain multiple commands together.
+
+## Environment variables
+
+> ENV instructions in a Dockerfile are used to define environment variables that can be accessed by the processes running inside the container. These variables can be used to configure the behavior of the application, pass parameters, or store sensitive information like API keys.
+
+```Dockerfile
+# Use an official Node.js runtime as a parent image
+FROM node:14-alpine
+
+# Set the working directory to /app
+WORKDIR /app
+
+# Copy package.json and package-lock.json into the container
+COPY package*.json ./
+
+# Install any needed packages specified in package.json
+RUN npm ci
+
+# Copy the rest of the application code
+COPY . .
+
+# Set environment variables
+ENV PORT=8000
+ENV API_KEY=my_secret_key
+
+# Expose the port the app will run on
+EXPOSE 8000
+
+# Run the app
+CMD ["npm", "start"]
+```
+
+In this example, two environment variables are defined: `PORT` and `API_KEY`. These variables can be accessed in the application code using `process.env.PORT` and `process.env.API_KEY` in Node.js, for example.
+
+To override the environment variables defined in the Dockerfile when running the container, you can use the `-e` flag followed by the variable name and its new value in the `docker run` command (the image name comes last):
+
+```sh
+docker run -e PORT=8080 -e API_KEY=new_secret_key -p 8080:8000 [image]
+```
diff --git a/DevOps/Docker/Docker_Compose.md b/DevOps/Docker/Docker_Compose.md
new file mode 100644
index 0000000..7159fc4
--- /dev/null
+++ b/DevOps/Docker/Docker_Compose.md
@@ -0,0 +1,85 @@
+---
+categories:
+  - DevOps
+tags: [docker]
+---
+
+# Docker Compose
+
+- Docker Compose (DC) is a tool for defining and running multi-container applications. A typical example would be combining an FE with a BE implementation and a database as three related but distinct containers.
+
+- DC is a prime example of _infrastructure as code_.
+
+- A YAML file defines all the required configuration and **all services can be started with a single command**.
+
+- It provides a consistent configuration across environments: development, test, production.
+
+## Composition
+
+Three-step process:
+
+1. Define your application image in a Dockerfile
+2. Define the services in `docker-compose.yaml` (backend services, networks, volumes)
+3. Start the entire application with `docker compose up`
+
+### `docker-compose.yml`
+
+- Must always begin with `version`
+- Must have a `services` map
+  - This defines the different components that comprise your application
+  - Services can be either an image reference (the one that generates the container you want to include) or a build context
+
+Look at the following example. It runs a Python Flask frontend connected to a Redis key-value store as the backend.
+
+```yml
+# docker-compose.yml
+
+version: "3.9"
+services:
+  web:
+    build: . # here we are using the current directory as the build context
+    image: myapp:1.0
+    ports:
+      - "5050:5000"
+    depends_on:
+      - redis
+  redis:
+    image: "redis:alpine"
+```
+
+`depends_on` is important - it means the listed service must be running before this one starts. If the dependency is a DB, it means the data is accessible before frontend requests start.
+
+## Main commands
+
+```sh
+# start the composition:
+docker compose up -d
+
+# start a specific service within the composition:
+docker compose up -d web
+docker compose up -d redis
+
+# view logs for a specific service
+docker compose logs --follow web
+
+# view processes running within the composition
+docker compose top
+```
+
+## Environment variables
+
+You can define them in the compose file, or name them and leave them blank. If you do the latter, Docker will look in the host environment for the variable, which is really handy:
+
+```yml
+...
+services:
+  redis:
+    ...
+    environment:
+      ENV1: 'some value'
+      ENV2:
+```
+
+## See also
+
+[NodeJS and MySQL Docker backend](/DevOps/Docker/Docker_Examples/Node_and_MySQL_db.md)
diff --git a/DevOps/Docker/Docker_Examples/Connecting_a_frontend_to_a_Docker_backend.md b/DevOps/Docker/Docker_Examples/Connecting_a_frontend_to_a_Docker_backend.md
new file mode 100644
index 0000000..a44bae1
--- /dev/null
+++ b/DevOps/Docker/Docker_Examples/Connecting_a_frontend_to_a_Docker_backend.md
@@ -0,0 +1,38 @@
+---
+categories:
+  - DevOps
+  - Databases
+tags: [docker, SQL, node-js]
+---
+
+# Connecting a frontend to a Docker backend
+
+Building on [NodeJS backend with MySQL database](/DevOps/Docker/Docker_Examples/Node_and_MySQL_db.md), we can add a frontend by adapting the existing [Docker Compose](/DevOps/Docker/Docker_Compose.md) files (one for each environment) to accept an additional dependency.
+
+We won't create a separate composition for the frontend as this is not necessary; it is simply an additional service.
+
+We would add the following to the `services` section of the compose file:
+
+```yml
+services:
+  ...
+  frontend:
+    build: ./frontend
+    container_name: frontend_app
+    ports:
+      - "3000:3000"
+    environment:
+      - REACT_APP_API_URL=http://localhost:3001
+    depends_on:
+      - app
+```
+
+The `REACT_APP_API_URL` environment variable tells the frontend application where to find the backend API.
+This will then be accessible via `process.env` and can be used directly in the frontend, e.g.:
+
+```js
+fetch(`${process.env.REACT_APP_API_URL}/api/data`)
+  .then((response) => response.json())
+  .then((data) => {
+    // Process the data from the backend API
+  });
+```
diff --git a/DevOps/Docker/Docker_Examples/Node_and_MySQL_db.md b/DevOps/Docker/Docker_Examples/Node_and_MySQL_db.md
new file mode 100644
index 0000000..6072021
--- /dev/null
+++ b/DevOps/Docker/Docker_Examples/Node_and_MySQL_db.md
@@ -0,0 +1,240 @@
+---
+categories:
+  - DevOps
+  - Databases
+tags: [docker, SQL, node-js]
+---
+
+# Docker example: NodeJS backend with MySQL database
+
+We will utilise [Docker Compose](/DevOps/Docker/Docker_Compose.md) to combine two containers:
+
+- A container for the NodeJS backend
+- A container for the MySQL database
+
+We will only create a Dockerfile for the NodeJS part since the existing `mysql` image is sufficient for most needs and does not require a specific configuration.
+
+Each of the files listed below would be saved to the same source directory, which would then form the basis of the [build context](/DevOps/Docker/Creating_a_Docker_image.md#creating-a-docker-image).
+
+## Docker Compose file
+
+```yml
+# docker-compose.yml
+version: "3.8"
+services:
+  db:
+    image: mysql:8.0
+    container_name: mysql_container
+    environment:
+      MYSQL_ROOT_PASSWORD: your_root_password
+      MYSQL_DATABASE: your_database_name
+      MYSQL_USER: your_database_user
+      MYSQL_PASSWORD: your_database_password
+    volumes:
+      - mysql-data:/var/lib/mysql
+    ports:
+      - "3306:3306"
+  app:
+    build: .
+    container_name: node_app
+    volumes:
+      - .:/usr/src/app
+    environment:
+      MYSQL_HOST: db
+      MYSQL_USER: your_database_user
+      MYSQL_PASSWORD: your_database_password
+      MYSQL_DB: your_database_name
+    depends_on:
+      - db
+    ports:
+      - "3000:3000"
+
+volumes:
+  mysql-data:
+```
+
+## Dockerfile for the NodeJS backend
+
+```Dockerfile
+# Dockerfile
+FROM node:14
+
+WORKDIR /usr/src/app
+
+COPY package*.json ./
+
+RUN npm install
+
+COPY . .
+
+EXPOSE 3000
+
+CMD [ "node", "app.js" ]
+```
+
+## NodeJS project setup
+
+```json
+// package.json
+{
+  "name": "node-mysql-docker",
+  "version": "1.0.0",
+  "description": "Node.js and MySQL with Docker",
+  "main": "app.js",
+  "scripts": {
+    "start": "node app.js"
+  },
+  "dependencies": {
+    "express": "^4.17.1",
+    "mysql2": "^2.3.2"
+  }
+}
+```
+
+```js
+// app.js
+const express = require("express");
+const mysql = require("mysql2/promise");
+
+const app = express();
+
+const { MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB } = process.env;
+
+const createConnection = async () => {
+  return await mysql.createConnection({
+    host: MYSQL_HOST,
+    user: MYSQL_USER,
+    password: MYSQL_PASSWORD,
+    database: MYSQL_DB,
+  });
+};
+
+app.get("/", async (req, res) => {
+  const connection = await createConnection();
+  const [rows] = await connection.query("SELECT 1 + 1 AS solution");
+  res.send(`Hello World! The solution is ${rows[0].solution}`);
+});
+
+const PORT = 3000;
+app.listen(PORT, () => {
+  console.log(`Server is running on port ${PORT}`);
+});
+```
+
+To start up the environment you would then run:
+
+```
+docker-compose up -d
+```
+
+## Environments
+
+In the example, the database connection information in the Node source is coming from the [`process.env`](/Programming_Languages/NodeJS/Architecture/Managing_environments.md) object, which itself is sourcing the values `MYSQL_HOST`, `MYSQL_PASSWORD` etc. from the Docker Compose file. Therefore these values are hardcoded there.
+This is not good practice as it exposes sensitive information and makes managing different deployment environments (development, staging, test, etc.) difficult.
+
+To get around this we would create an `.env` file in the project directory that is Git ignored:
+
+```sh
+# .env
+MYSQL_ROOT_PASSWORD=your_root_password
+MYSQL_DATABASE=your_database_name
+MYSQL_USER=your_database_user
+MYSQL_PASSWORD=your_database_password
+```
+
+Then the `docker-compose.yml` file can be updated to use these variables:
+
+```yml
+version: "3.8"
+services:
+  db:
+    image: mysql:8.0
+    container_name: mysql_container
+    environment:
+      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
+      MYSQL_DATABASE: ${MYSQL_DATABASE}
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+    volumes:
+      - mysql-data:/var/lib/mysql
+    ports:
+      - "3306:3306"
+  app:
+    build: .
+    container_name: node_app
+    volumes:
+      - .:/usr/src/app
+    environment:
+      MYSQL_HOST: db
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+      MYSQL_DB: ${MYSQL_DATABASE}
+    depends_on:
+      - db
+    ports:
+      - "3000:3000"
+
+volumes:
+  mysql-data:
+```
+
+The `${VARIABLE_NAME}` syntax is used to reference environment variables from the `.env` file in the `docker-compose.yml` file. Docker Compose will automatically load the variables from the `.env` file when starting the services.
+
+### Development, staging, production environments
+
+To specify different connection details for different environments you would create different `.env` files for each:
+
+- `.env.development`
+- `.env.staging`
+- `.env.production`
+
+Each file will contain **environment-specific variables**, such as database credentials, API keys, and other configuration details.
+
+For example, development and production:
+
+```yml
+# docker-compose.development.yml
+version: '3.8'
+services:
+  db:
+    ...
+  app:
+    ...
+    environment:
+      MYSQL_HOST: db
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+      MYSQL_DB: ${MYSQL_DATABASE}
+      NODE_ENV: development
+```
+
+```yml
+# docker-compose.production.yml
+version: '3.8'
+services:
+  db:
+    ...
+  app:
+    ...
+    environment:
+      MYSQL_HOST: db
+      MYSQL_USER: ${MYSQL_USER}
+      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
+      MYSQL_DB: ${MYSQL_DATABASE}
+      NODE_ENV: production
+```
+
+Then you would select the specific environment with your run command:
+
+```
+docker-compose -f docker-compose.development.yml up -d
+```
+
+Docker won't know by default which `.env` file to use from that command, however. Assuming all the files are in the same directory, you can use Bash command substitution to specify the source of the environment-specific variables:
+
+```sh
+export $(cat .env.development | xargs) && docker-compose -f docker-compose.development.yml up -d
+```
diff --git a/DevOps/Docker/Docker_containers.md b/DevOps/Docker/Docker_containers.md
index 3214cb5..ad1286e 100644
--- a/DevOps/Docker/Docker_containers.md
+++ b/DevOps/Docker/Docker_containers.md
@@ -69,7 +69,7 @@ docker stop 2749
 
 There will be a delay because it shuts down gracefully. It sends a SIGINT to the process in the container with PID 1 (i.e the root or parent process for the container).
 
-`stop` will keep the container in memory. This means you can still refer back to the logs and that it can be restarted.
+`stop` will keep the container in memory. This means you can still refer back to the logs and that it can be restarted.
 
 Instead of `stop`, if you were to use:
 
@@ -85,6 +85,84 @@ We can also tell Docker to immediately remove a container after it exits:
 
 ```
 docker run --rm [image]
 ```
 
+## Interacting with containers
+
+In the examples so far the container is a closed box. You don't have a terminal through which you can interact with the container as you would with an OS. You can only start the container and view its activities by accessing the logs.
+
+For images that have an OS we can use `-i -t` to launch a terminal in interactive mode.
+
+```sh
+docker run -i -t debian /bin/bash
+root@0022da12f2f2:/# ls
+bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
+root@0022da12f2f2:/# whoami
+root
+root@0022da12f2f2:/# ls usr/
+bin  games  include  lib  libexec  local  sbin  share  src
+```
+
+Note that it defaults to the root user.
+
+Containers are lightweight and should only contain the minimum environment needed to run an application. For this reason OSs that are included in an image are often very stripped-back and many programs and processes will not be there by default. (Although you could install a package manager and install the commands you need.)
+
+The previous command launches the container and enters into it with a terminal. If a container is already running, you can enter into it in terminal mode with `exec`:
+
+```
+docker exec -i -t my_container bash
+```
+
+If a container is running in detached mode we can attach to the main process with `docker attach`:
+
+```
+docker attach my_container
+```
+
 ## Container lifecycle
 
-## Interacting with containers
+![](/_img/container-lifecycle.png)
+
+All containers have a lifecycle represented by five distinct states. Most states have an associated command:
+
+- created
+  - `docker create`, `docker run`
+- running
+- paused
+  - `docker pause`, `docker unpause`
+- stopped
+  - `docker stop`, `docker kill`
+- removed
+  - `docker rm`
+
+## Main container commands
+
+```sh
+docker create     # create container from image
+docker run        # create and start container from image
+docker run --name # add custom name for container
+docker ps         # list currently active containers
+docker ps -a      # list all containers (inactive and active)
+docker logs       # display a container log
+docker stop       # stop a running container
+docker start      # starts a stopped (exited) container
+docker rm         # remove a stopped (exited) container
+```
+
+## Container troubleshooting
+
+Lots of diagnostic output:
+
+```
+docker inspect container_name
+```
+
+List processes (from outside):
+
+```
+docker top container_name
+```
+
+View logs:
+
+```
+docker logs container_name
+```
diff --git a/DevOps/Docker/Docker_images.md b/DevOps/Docker/Docker_images.md
index 3140fb0..854070a 100644
--- a/DevOps/Docker/Docker_images.md
+++ b/DevOps/Docker/Docker_images.md
@@ -5,3 +5,92 @@ tags: [docker]
 ---
 
 # Docker images
+
+Docker needs an image to run a container.
+Images are stored in the local registry. If Docker can't find an image locally, it will try to pull it from Docker Hub.
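+
+For example, you can fetch an image explicitly ahead of `docker run` (the official `debian` image is used here purely as an illustration):
+
+```sh
+# download the image from Docker Hub into the local registry if it is not already present
+docker pull debian:latest
+```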
+
+## View your local images
+
+```
+docker images
+```
+
+```sh
+REPOSITORY                     TAG      IMAGE ID       CREATED         SIZE
+debian                         latest   e6d9c9c3e100   12 days ago     118MB
+ubuntu                         latest   bab8ce5c00ca   6 weeks ago     69.2MB
+cda/mysql-5.7                  latest   312ce2a6cea5   3 months ago    495MB
+cda/node-16                    latest   b26e54c8fa11   3 months ago    1.44GB
+bitnami/minio                  latest   ef6d14df2158   3 months ago    229MB
+opensearchproject/opensearch   1.3.7    0f8ef57c3629   4 months ago    831MB
+hello-world                    latest   46331d942d63   13 months ago   9.14kB
+```
+
+## Repository, registry, tag
+
+When you specify the name of an image, you are referring to the image's repository. A registry (such as Docker Hub) is the collection of repositories available to pull from. Each image in a repository has a tag, typically indicating a version or variant.
+
+For example:
+
+```
+docker run -it ubuntu:18.04 /bin/bash
+```
+
+- `ubuntu` is the image repository
+- `18.04` is the image tag
+
+> The Image ID distinguishes the image globally and is not local to you.
+
+To get detailed info on an image:
+
+```
+docker image inspect debian
+```
+
+## "Dangling" images
+
+A "dangling" image is an unused image - one that is not associated with a container. Remove dangling images with:
+
+```
+docker image prune
+```
+
+## Anatomy of a Docker image: layering
+
+The process of constructing a Docker image involves creating a series of intermediate layers, each representing a change or addition to the filesystem. These layers are stacked on top of one another, forming the final image. This approach is called layering.
+
+Layering in Docker images offers several benefits:
+
+1. Reusability: Common layers can be shared between multiple images, reducing disk space usage and build time. For example, if multiple images use the same base operating system, they can all share the same base layer.
+
+2. Caching: During the build process, Docker caches layers that have already been built. If a layer hasn't changed, Docker will reuse it in subsequent builds, speeding up the build process.
+
+3. Incremental updates: When updating an image, only the layers that have changed need to be updated and redistributed. This reduces the size of updates and makes the process more efficient.
+
+Each instruction in the Dockerfile creates a new layer in the image. Some common instructions include:
+
+- `FROM`: Specifies the base image to start with (e.g., an operating system or an existing image).
+
+- `RUN`: Executes a command, often used for installing packages or configuring the system.
+
+- `COPY`: Copies files or directories from the host machine to the image.
+
+- `ADD`: Similar to COPY but can also handle URLs and automatic extraction of archives.
+
+- `ENV`: Sets environment variables that will be available inside the container.
+
+- `CMD`: Specifies the default command to run when a container is started from the image.
+
+- `ENTRYPOINT`: Similar to CMD, but allows the container to be run as an executable, with additional arguments.
+
+When building an image, Docker starts with the base image specified in the FROM instruction and applies each subsequent instruction, creating a new layer for each step. The final result is a layered image that contains the application and its dependencies, ready to be run as a container.
+
+### The read-write layer
+
+The final layer at the "top" of the image is a thin read-write layer. In contrast to the intermediate layers, it is writable rather than read-only.
+
+Files in the intermediate layers cannot be modified; only the read-write layer can be modified.
+Files are copied up from the intermediate layers when requested by the read-write layer, rather than being modified in place (copy-on-write).
+
+Tip for writing build files:
+
+Things that change less should be higher up (earlier). Things that change more should be towards the end. This allows for better layer caching. Particularly bear this in mind with the commands: `RUN`, `COPY`, `ADD`.
diff --git a/DevOps/Docker/Docker_notes.md b/DevOps/Docker/Docker_notes.md
deleted file mode 100644
index 89f0d34..0000000
--- a/DevOps/Docker/Docker_notes.md
+++ /dev/null
@@ -1,233 +0,0 @@
-## Basic usage
-
-docker run hello-world
-
-First time won't have the image to run container, will fetch from Docker Hub
-
-See running processes (add `-a` flag for historic):
-
-```
-docker ps -a
-```
-
-````
-CONTAINER ID   IMAGE                                COMMAND                  CREATED              STATUS                        PORTS                                                                 NAMES
-64d92f77c12f   hello-world                          "/hello"                 About a minute ago   Exited (0) 19 seconds ago                                                                           frosty_moser
-d6c54f7801fe   cda/node-16                          "docker-entrypoint.s…"   3 months ago         Exited (255) 50 minutes ago   0.0.0.0:3000->3000/tcp, 0.0.0.0:3002->3002/tcp                        cda-node-16
-ddb15b93b47e   cda/mysql-5.7                        "docker-entrypoint.s…"   3 months ago         Exited (255) 50 minutes ago   33060/tcp, 0.0.0.0:33066->3306/tcp                                    cda-mysql-5-7
-567f35be3762   cda/mysql-5.7                        "docker-entrypoint.s…"   3 months ago         Exited (255) 50 minutes ago   33060/tcp, 0.0.0.0:33067->3306/tcp                                    cda-mysql-5-7-testing
-dc8dca198f07   bitnami/minio:latest                 "/opt/bitnami/script…"   3 months ago         Exited (255) 50 minutes ago   0.0.0.0:9000-9001->9000-9001/tcp                                      cda-minio
-9397d4c72679   opensearchproject/opensearch:1.3.7   "./opensearch-docker…"   3 months ago         Exited (255) 50 minutes ago   0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp   cda-opensearch
-```
-
-Each container will be designated with a GUID and a random name for ease of reference
-````
-
-First lab:
-
-https://docs.noomedia.com/labs/essential-docker/b/9AA25158-CDC5-412A-9590-2226F4ED81E9/Lab-01.-Introduction-to-Docker-Deskt
-
-Key learnings:
-
-Give custom name
-
-```
-docker run --name my-custom-name hello-world
-```
-
-Remove all containers
-
-```
-docker container prune
-```
-
-A "dangling" image is an unused image - one that is not associated with a container. Remove dangling images with
-
-```
-docker image prune
-```
-
-# Unit 3: Containers in depth
-
-`docker run` implicitly means run within container but you can specify networks and other docker objects.
-
-```
-docker run debian /bin/echo "Hello, world"
-```
-
-Creates and runs a container from an image called `debian` and executes /bin/echo inside of it.
-
-Once you have run this. If successful it will exit. The container will stop running (exit) but the container will still be in memory.
-
-When `run` is used it grabs the image and constructs and starts a container using it.
-
-It's actually combining two commands:
-
-```
-docker create --name my_container hello-world
-docker start my_container
-```
-
-You can also run a container as a daemon. The container will run in the background and detaches it from the console:
-
-```
-docker run -d debian /bin/sh -c /bin/sh -c "while true; do echo 'Hello!'; sleep 1; done"
-```
-
-This will log every second
-
-What will acutally be output is the container ID:
-
-```
-2749d796cbd64e9cf57307329e792587c39d8244f2377e62d78f3f3f77eecdb4
-```
-
-```
-docker log 2749
-hello
-hello
-hello
-```
-
-This will keep running in an infinite loop until:
-
-```
-docker stop 274
-```
-
-There will be a delay because it shuts down gracefully.
-It sends a SIGINT to the process in the container with PID 1 (i.e the root or parent process for the container)
-
-stop keeps it in memory so it can be restarted and you can examine the logs.
-
-if you use `docker rm 274`, it cannot be becovered and the logs are deleted.
-
-We can tell Docker to immediately remove a container after it exits:
-
-```
-docker run --rm [container]
-```
-
-## Container lifecycle
-
-//TODO : Add diagram of lifecycle.
-
-## Interacting with containers
-
-In the examples so far the container is a closed box. You don't have a terminal through which you can interact with the container as you would with an OS.
-
-For images that have an OS we can use `-i -t` to launch a terminal in interactive mode.
-
-```sh
-docker run -i -t debian /bin/bash
-root@0022da12f2f2:/# ls
-bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
-root@0022da12f2f2:/# whoami
-root
-root@0022da12f2f2:/# ls usr/
-bin  games  include  lib  libexec  local  sbin  share  src
-```
-
-Note that it defaults to the root user.
-
-> Containers are lightweight and should only contain the minium environment needed to run an application.
-
-Many programs and processes will not be there by default for this reason. Although you could install a package manager and install the commands you need.
-
-If the container is already running, you can access it via the `exec` command.
-
-```
-docker exec -i -t my_container bash
-```
-
-If a container is running in detached mode we can attach to the main process with `docker attach`
-
-```
-docker attach my_container
-```
-
-## Container troubleshooting
-
-Lots of diagnostic output:
-
-```
-docker inspect container_name
-```
-
-List processes (from outside):
-
-```
-docker top container_name
-```
-
-View logs
-
-```
-docker logs container_name
-
-```
-
-# Resources
-
-https://www.taniarascia.com/continuous-integration-pipeline-docker/
-
-# Unit 4: Docker images
-
-Docker needs an image to run a container.
-Images are stored in the local registry. If it can't find it, it will try and pull it from Docker Hub.
-
-List images:
-
-```
-docker images
-```
-
-```sh
-docker images
-REPOSITORY                     TAG      IMAGE ID       CREATED         SIZE
-debian                         latest   e6d9c9c3e100   12 days ago     118MB
-ubuntu                         latest   bab8ce5c00ca   6 weeks ago     69.2MB
-cda/mysql-5.7                  latest   312ce2a6cea5   3 months ago    495MB
-cda/node-16                    latest   b26e54c8fa11   3 months ago    1.44GB
-bitnami/minio                  latest   ef6d14df2158   3 months ago    229MB
-opensearchproject/opensearch   1.3.7    0f8ef57c3629   4 months ago    831MB
-hello-world                    latest   46331d942d63   13 months ago   9.14kB
-```
-
-When you specify the name of an image, you are referring to the repository. The registry is the list of repositories available on the Docker Hub.
-
-Eg:
-
-```
-docker run -it ubuntu:18:04 /bin/bash
-```
-
-- `ubuntu` is the image repository
-- `18.04` is the image tag
-
-Without specifying a tag, Docker will default to `latest`.
-
-Docker images also support different platforms, e.g. `arm64`, `amd64`
-
-Docker will try to pull the image for your platform.
-
-Linux-based Intel processors are represented by `amd64`
-
-Examine image details:
-
-```
-docker image inspect
-```
-
-Remove an image:
-
-```
-docker rmi
-```
-
-## Anatomy of an image
-
-An image is not a single file but a series of read-only layers like an onion.
-When you pull an image, you will see multiple GUIDs listed in the terminal. Each one of these is a layer.
-
-Each layer represents an incremental change to the previous layer.
-The first layer is the base layer and the layers on top are called intermediary layers. Each layer is hashed as a digest. This proves that the layer has not been manipulated.
diff --git a/DevOps/Docker/Docker_storage.md b/DevOps/Docker/Docker_storage.md
new file mode 100644
index 0000000..7b8c6a4
--- /dev/null
+++ b/DevOps/Docker/Docker_storage.md
@@ -0,0 +1,58 @@
+# Docker storage
+
+The following storage problems face containerized environments:
+
+- Container storage is ephemeral. When the container is removed, the read-write layer is removed with it. How can we persist data?
+
+- Container storage is isolated: containers cannot access each other's file systems. If they need to share data, this could be a problem.
+
+### Docker volumes
+
+Docker volumes are the solution to this. Volumes:
+
+- store data on the host file system (i.e. outside the container)
+- persist data beyond the lifecycle of the container
+
+> A Docker volume is a logical volume that exists outside the container file
+> system. It maps to a directory on the host and this location is managed by Docker - you don't get much control over it.
+
+Volumes can be **named** or **anonymous**. Anonymous volumes are created by Docker as needed. Named volumes are named by the user. In both cases Docker decides where to create the volume.
+
+### Creating a volume
+
+```sh
+docker run -d --rm -v /usr/share/nginx/html nginx:latest
+```
+
+The `-v` flag indicates a volume. The file reference is the mount point _within_ the container: this is where the persistent storage will appear. The _actual_ volume exists outside of the container, anonymously, in a place decided by Docker.
+
+To create a named volume you first have to create it explicitly:
+
+```
+docker volume create volume_name
+```
+
+Then you can designate it:
+
+```
+docker run -d --rm -v volume_name:/usr/share/nginx/html nginx:latest
+```
+
+The actual location is still hidden and decided by Docker.
+
+As anonymous volumes are unnamed, they show up as GUIDs in the volume viewer in Docker Desktop:
+
+// TODO: add screenshot
+
+### Commands
+
+```
+docker volume ls
+docker volume rm
+docker volume inspect
+docker volume prune # remove all unused volumes
+```
+
+`inspect` is only really informative if you are on Linux, because it will show you the actual physical path. On Windows and Mac the reference is to a path inside the Docker virtual machine.
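+
+Named volumes also address the data-sharing problem mentioned at the start: two containers can mount the same volume. A minimal sketch (the volume name, file name and `alpine` image are arbitrary choices for illustration):
+
+```sh
+# create a named volume to be shared
+docker volume create shared-data
+
+# one container writes a file into the volume
+docker run --rm -v shared-data:/data alpine sh -c "echo hello > /data/greeting.txt"
+
+# a second container mounts the same volume and reads the file back
+docker run --rm -v shared-data:/data alpine cat /data/greeting.txt
+```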
diff --git a/Programming_Languages/YAML/Guide_to_YAML.md b/Programming_Languages/YAML/Guide_to_YAML.md
new file mode 100644
index 0000000..0c67631
--- /dev/null
+++ b/Programming_Languages/YAML/Guide_to_YAML.md
@@ -0,0 +1,79 @@
+---
+categories:
+  - Programming_Languages
+tags: []
+---
+
+# Guide to YAML
+
+- Data description language
+- Alternative to XML and JSON
+- Easy to read by humans and process by code
+
+## Main rule
+
+> Never use tabs for indentation, use two spaces
+
+## Available data types
+
+- strings
+- numbers
+- booleans
+- arrays
+- maps (key, value pairs)
+
+### Example
+
+```yaml
+# Basic scalar data types
+age: 51
+greeting: Hello, world
+length: 182.3
+happy: true
+```
+
+```yaml
+# array
+pets:
+  - cat
+  - dog
+
+# nested array
+pets:
+  - cat
+  - dog
+  - - staffy
+    - pitty
+```
+
+```yaml
+# map
+languages:
+  java: advanced
+  python: average
+  javascript: basic
+```
+
+### Strings
+
+- Don't have to use quotes unless the string contains reserved YAML characters (dashes, colons, etc.)
+- Can use single or double quotes
+
+### Multi-line text
+
+Two methods:
+
+```yaml
+# Literal block preserves new line characters
+literal_block: |
+  Lorem ipsum dolar sit avcc lorem dolar
+  Blah blah
+  Blah
+```
+
+```yaml
+folded_block: >
+  Each different line
+  will be folded back on itself so you don't
+  have to keep scrolling to the right
+```
diff --git a/README.md b/README.md
index 0b35bae..abc09ab 100644
--- a/README.md
+++ b/README.md
@@ -5,6 +5,4 @@
 This repository contains notes from my autodidactic study
 of software engineering and computer science.
 
-![](/_img/Written-By-Human-Not-By-AI-Badge-white.svg)
-
diff --git a/_img/container-commands.png b/_img/container-commands.png
new file mode 100644
index 0000000..5874b1e
Binary files /dev/null and b/_img/container-commands.png differ
diff --git a/_img/container-lifecycle.png b/_img/container-lifecycle.png
new file mode 100644
index 0000000..c3f98e0
Binary files /dev/null and b/_img/container-lifecycle.png differ
diff --git a/_img/docker-lifecycle.png b/_img/docker-lifecycle.png
new file mode 100644
index 0000000..ec89726
Binary files /dev/null and b/_img/docker-lifecycle.png differ
diff --git a/_img/docker_volumes.png b/_img/docker_volumes.png
new file mode 100644
index 0000000..d49a73f
Binary files /dev/null and b/_img/docker_volumes.png differ