CS 470: Unix/Linux Sysadmin, Summer 2021, Lab 4
Ubuntu + Docker + NGINX
This lab gets our hands dirty with the other, far bigger super-major Linux distribution: Ubuntu. With the
missteps recently made by Red Hat and discussed in lab three, expect that gap to grow. Ubuntu is clearly
ready to capitalize; the screenshot below is a link from the main page of ubuntu.com.
We often call Linux operating systems “distributions” because, like we’ve discussed in lecture … Linux isn’t an
operating system. Linux is a kernel, and a collection of tools directly around, and related to … the kernel.
Everything else that a vendor puts around it, that makes it a whole operating system … is not Linux. So we end
up with these distributions, these piles of software, built around the Linux kernel, organized by vendors, that
become the value we expect of an operating system. Red Hat Linux, Ubuntu Linux … hopefully you get the
idea.
Ubuntu Linux is based, in turn, upon Debian Linux (named for Debra and Ian Murdock). Debian-based Linux distributions
tend to use the “.deb” package format and the apt package manager, very analogous to the situation with
“.rpm” packages and the yum or dnf package managers on Rocky Linux and Red Hat-based distributions, of
which there are several, as discussed at the beginning of lab 3.
Other Debian-based distributions include Kali Linux, which some of you have already used in CS 574, and
ParrotOS, another security distribution I’m starting to like better than Kali. Ubuntu is so popular that many
popular Debian derivatives are based directly on Ubuntu: Linux Mint (used in prior iterations of this class),
Kubuntu (Ubuntu with the KDE desktop instead of GNOME), Lubuntu (LXDE/LXQT desktop), Xubuntu (with
Xfce), and Ubuntu Budgie (with the Budgie desktop).
Most people who use Ubuntu came across it because, until recently, Ubuntu Desktop had the best-looking
desktop Linux … the one that was closest to a Linux you could give your grandparents
to use. Ubuntu used to develop its own desktop environment, called “Unity,” but has recently dropped it in
favor of making its own additions to the community-developed GNOME desktop environment. I
wholeheartedly suggest you try Ubuntu Desktop on your own time later … but for this lab, we’ll be using
Ubuntu Server, with no graphical desktop like our minimal Rocky install, to conserve resources.
part 0: get it
Go to www.ubuntu.com, click the download menu along the top bar, then Ubuntu Server, and on the
overengineered first server download page, choose “manual server installation.” On
the following page, hit the green button to download Ubuntu Server 20.04.2 LTS. The download is just over 1 GB. Note:
LTS stands for “long-term support,” which means that among major releases of the operating system, an LTS
release targets stability, and Ubuntu will sell support for this release for up to ten years, depending upon how
you pay for it.
Unlike with Red Hat, there is no need for a separate free distribution (as CentOS was, and Rocky now is, for Red Hat)
to get free patches … because patches for Ubuntu are simply provided for free. Canonical, the company behind Ubuntu,
chooses not to tie patches and updates to paid support … Red Hat wants you to pay for that.
part 1: install it
Create a new custom VM on your shared VM network, using the following specifications:
CPU: single virtual core/processor
RAM: 1 GB (1024 MB)
hard disk: 20 GB
guest OS: Ubuntu 64-bit
Please de-select “Easy Install.” The installation of Linux, and Ubuntu especially, is easy enough, and we don’t
want to hide from the details here … especially when VMware’s “Easy Install” just seems to make things hard.
When the installer window comes up, if you’ve done Debian Linux before, you might recognize similarities.
Ubuntu’s installer is similar in design and spirit to FreeBSD’s text-menu installer. Just like there, you’ll
be using the space bar and return keys to select things, and the arrow keys and tab to navigate between text-based UI elements.
1. Select your language.
2. After selecting the language, the Ubuntu installer may offer to update itself. It’s worth noting that while
Ubuntu’s distributions are generally pretty good, the installer can be hideously unstable … which is
why they’ve developed a post-boot on-line installer update feature.
This installer update feature has almost always led to a crash for me, but I’m going to give it another
chance. If it fails for you, just reboot from the installation media (the ISO) and try the installation
again, without letting it update.
If you don’t see this screen or one like it, don’t worry. You can just skip this step.
3. Select your keyboard layout.
4. Network setup is next; just like with the last two OSs, we’re going to set up a static IP right here in the
installer. Choose your VM’s only network interface with a return, then choose to edit IPv4, and change
from automatic configuration to manual.
For the subnet, the installer expects the network address of your VMware NAT subnet, in CIDR
notation. This is the .0 reserved “network” address on that subnet, followed by “/24,” which encodes the
netmask and thus the size of the subnet. Mine is 192.168.223.0/24.
For address, remember your Ubuntu system is .74 on your VMware subnet. The gateway is .2, as
usual. For the name server, provide the IP address of your OpenBSD VM, and cs470.local as the only
search domain.
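The “/24” arithmetic above can be sketched in plain shell. This is a toy illustration; the 192.168.223.0/24 numbers come from my VMware subnet, and yours will differ:

```shell
# How much room is in a /24? A /24 keeps 24 network bits of the 32-bit
# IPv4 address, leaving 32 - 24 = 8 host bits.
prefix=24
host_bits=$((32 - prefix))
# 2^8 = 256 addresses on the subnet, including the reserved .0 "network"
# address and the .255 broadcast address.
echo "$((1 << host_bits)) addresses"
```

With prefix=24 this prints “256 addresses”; the corresponding netmask keeps the top 24 bits set, i.e. 255.255.255.0.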
5. Proxy? We don’t need no stinkin’ proxy. You’re directly connected to the internet. Leave this one
blank.
6. On the mirror selection screen, the default archive mirror is fine, whichever mirror it provides.
7. On the storage configuration screen, start by selecting “custom storage layout.”
As I said before with Rocky, avoid LVM like the plague unless you’re on a massive piece of physical
hardware where you can actually use this.
The only disk option you should have under “available devices” should be /dev/sda … note that in
Linux, sd is the disk device type, a means the first disk, b the second and so on, and your partitions, as
in Rocky, will be numbered.
Select /dev/sda, and first, choose to use it as the boot device. Select it again, and choose to add a GPT
partition. Make an 18 GB root filesystem, with the ext4 filesystem. Completing the setup of the root
filesystem by selecting “create” will return you to the screen where you selected to create the root
filesystem on /dev/sda. It should show that just under 2 GB remain available (mine said 1.997G).
Select /dev/sda again, and create a swap partition with all remaining space.
Your disk setup should look like the next screenshot … once you’re content with your disk setup,
choose “done.” It’s going to ask you to confirm whether you want to take destructive actions …
remember that your VM’s hard disk is just a virtual disk in a file on your computer. “Continue” without
hesitation.
The Ubuntu installer will then go to the “profile setup” screen, where it’ll ask you to set up a user. Use
whatever you like for your own name, use “ubuntu” as your server’s name, continue to use the same
username you’ve been using for the non-root user on the rest of your VMs, and choose a good
password …
… and after you hit “done” on this screen, DO select to install OpenSSH server on the “SSH Setup”
screen. Do NOT select to import an SSH identity. After you hit “done” on the SSH setup screen, you
get “featured server snaps.” Take a pass, go straight to “done.”
Hitting done starts the installation. The top of the screen said “install complete!” while the installer
was clearly still running “curtin hook” to finalize the installation … hmmmm. I do not think it means
what they think it means!
After a couple minutes, I got the option to “reboot now,” and I took it … you should too.
The installer, as a parting gift, asked me to remove the installation medium (again, this is the .ISO file in
the VM’s virtual optical drive that it is referring to) and press the enter key to reboot. Make sure the
ISO file is “disconnected” from the virtual optical drive in your VM, and hit return. This is one of the
few changes that you can make to the VM while it’s turned on … because we’re just ejecting a CD-ROM
from the computer.
8. After the reboot, you’ll get a login prompt, and if you let it sit for a minute, the login prompt will be
followed and chased off screen by a bunch of final, first-boot setup stuff, like generating a host key pair
for the VM’s SSH service.
If you hit the return key after it stops doing things – my VM stopped after it “reached target
cloud-init target” – you’ll get a fresh login prompt.
Note that you set no root password … on Ubuntu, as on macOS and a lot of OSs these days, the root
account is locked as a consequence of not setting a password … no password will work to log
you in. You are expected to use sudo whenever you use your admin mojo here.
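You can see what “locked” means mechanically: in /etc/shadow, a locked account carries “!” (or “*”) in its password hash field. A sketch using a made-up shadow line (never paste a real one anywhere):

```shell
# Hypothetical /etc/shadow entry for a locked root account; field 2 is the
# password hash field, and "!" means no password can ever match.
line='root:!:19000:0:99999:7:::'
field=$(printf '%s' "$line" | cut -d: -f2)
case $field in
  '!'*|'*')  echo locked ;;
  '')        echo 'no password at all' ;;
  *)         echo 'password set' ;;
esac
```

This prints “locked.” On your VM, `sudo passwd -S root` should report the same status (an “L” in its output).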
part 2: configuration and additional installations
9. In case your installer failed to configure the network properly, I’m leaving this in here.
If you are able to ping your Ubuntu VM, SSH into it, and look up DNS names on the internet with the
host command, you can skip ahead to step #10. Just in case you can’t, because failures to set up the
network were that frequent last year, I’m leaving this step in here. You might also just want to know
how the network is configured statically in Ubuntu … it’s fairly strange.
‼ YES, THE INSTALLER IN UBUNTU USED TO SUCK THIS BAD DURING THE TRANSITION TO NETPLAN!
If your Ubuntu VM’s network works properly after the installer, you can skip ahead to #10.
If you need to configure Ubuntu’s networking manually, you need to do all this network configuration
on the VM’s console. Typically, services aren’t watching their configuration files and automatically
adapting the configuration. cloud-init, however, is a rare exception to this rule … once you remove
the network configuration (the file 50-cloud-init.yaml) below, network connectivity to your Ubuntu
VM will immediately be broken, and you will not be able to SSH into it until you get the network back
up again.
Ubuntu 16.04 used the file /etc/network/interfaces to configure its network(s) … but that file, and
its backing subsystems, have been deprecated as of Ubuntu 18.04 in favor of the super-duper, mega-overengineered netplan. netplan stores its configurations in the folder /etc/netplan, and because
the Ubuntu installer detected we were running under VMware, it bundled the network configuration
along with – and named it after – a common suite of cloud instance initialization scripts, cloud-init.
cloud-init, like a lot of things in life, is really intrusive under the guise of trying to be helpful. Let’s
get it out of the network configuration business.
$ sudo apt remove ifupdown
This command removes the old network configuration subsystem from Ubuntu 16.04 and prior
versions.
$ sudo vi /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
We’re creating the file with the above command. To tell cloud-init not to configure the network, we
put this line in it:
network: {config: disabled}
Remove the old network configuration that leaned on cloud-init …
$ sudo rm /etc/netplan/50-cloud-init.yaml
… and create your own, new network configuration file:
$ sudo vi /etc/netplan/01-vmnet.yaml
Give it the following contents … and check your network adapter’s name with the command ip a
(mine was ens33 in the sample below) before you do …
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.223.74/24]
      gateway4: 192.168.223.2
      nameservers:
        search: [cs470.local]
        addresses: [192.168.223.71]
Once you’ve looked it over a couple times for typos, save it out, and tell netplan to generate the
necessary configuration …
$ sudo netplan generate
… and then apply it.
$ sudo netplan apply
Linux distributions are increasingly moving away from the ifconfig command toward the ip
command. To check out your network configuration, try the command ip a … if you can’t ping your
gateway, you got one of the IP addresses wrong. If you can’t ping outside your gateway (say 4.2.2.2),
you probably got your gateway IP wrong. If you can’t look up names, you probably got your
nameserver wrong, or it’s not working correctly.
10. Ordinarily, we’d set up SSH here, but we’re going to set up the NFS client on this VM instead, and get
our SSH directory – and the entirety of our /home directories – from the NFS server on the Rocky VM. In
order to do that, we need to initialize Ubuntu’s package management system, apt:
$ sudo apt update
You should see apt go out and grab the lists of the latest packages from Ubuntu’s repository, probably
the same “repo” as set up in number six above, and finish with something like …
63 packages can be upgraded. Run 'apt list --upgradable' to see them.
… only we don’t care about that, we want NFS first.
$ sudo apt install nfs-common
apt will tell you how much software it’s going to download and install, and ask for your confirmation
to proceed. After it’s done installing, you should be able to mount the NFS share of /home from your
Rocky VM. First, cd out of /home …
$ cd /
… and then manually mount it …
$ sudo mount rocky:/home /home
… running the mount and/or df commands should confirm that it’s properly mounted. If it’s not, you
goofed something up above this, and should backtrack.
11. You should also now be able to SSH into your Ubuntu VM without using a password, using key-based
authentication, as your public key is in place in ~/.ssh/authorized_keys thanks to the NFS mount.
Test it. Log into your Ubuntu VM over SSH … you may have to accept its SSH host key, as it is a new
SSH server to your client, but it should not ask for a password.
Pretty cool, huh? Since /home is now shared with the other two computers, your SSH public keyring is
already there.
12. Having the same home directory everywhere is badass; let’s make this NFS mount permanent. Use vi
to add the following line to /etc/fstab on your Ubuntu VM:
rocky:/home /home nfs defaults 0 0
Check twice for typos – remember, a problem mounting any filesystem listed in /etc/fstab will cause
your system to fail to fully boot – and test it out by rebooting your VM.
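For reference, the six whitespace-separated fields of that fstab line can be pulled apart in shell:

```shell
# The fstab line we just added, split into its six fields:
#   what to mount, where, filesystem type, mount options, dump flag, fsck pass
line='rocky:/home /home nfs defaults 0 0'
set -- $line   # word-split into positional parameters (no globs in this line)
echo "source=$1 mountpoint=$2 type=$3 options=$4 dump=$5 pass=$6"
```

The trailing “0 0” mean “don’t back this filesystem up with dump” and “don’t fsck it at boot” – sensible defaults for a network filesystem.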
13. Once you get your VM back from reboot, let’s take a couple minutes to install packages used later by
your instructor for grading purposes: csh (via tcsh), GNU binutils, and net-tools.
$ sudo apt install tcsh binutils net-tools
14. It’s time for network time!
$ sudo apt install ntp ntpdate
Enable NTP so that it will start up when you next boot your Ubuntu VM …
$ sudo systemctl enable ntp.service
… fire it up immediately …
$ sudo systemctl start ntp.service
… and verify it’s running.
$ ps auxww | grep ntpd | grep -v grep
The -v switch with grep excludes, instead of matches, lines; we’re making sure we only see a line if
there’s an ntpd process, not a line for our grep process trying to find ntpd.
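The `grep ntpd | grep -v grep` idiom can be seen in miniature with a couple of fabricated ps-style lines:

```shell
# Two fake "ps" lines: a real ntpd process, and our own grep looking for it.
# "grep ntpd" keeps both lines; "grep -v grep" then throws away the one
# that is our grep itself, leaving only the real ntpd process.
printf '%s\n' 'ntp   812  /usr/sbin/ntpd -g' 'user  950  grep ntpd' | grep ntpd | grep -v grep
```

Only the `ntpd -g` line survives the pipeline.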
15. Mail and mail forwarding next … Ubuntu 20.04 appears to come with no built-in mail subsystem.
$ which mail
This command returned no output.
$ which sendmail
This command returned no output … and most mail servers offer a sendmail command, for historical
reasons.
$ apt list --installed | grep -i mail
This command also returned no useful output, just a warning about how apt’s command line interface
(CLI) is in flux, and to be careful using it in scripts. So, let’s install postfix; it’s way easier to configure
than sendmail.
$ sudo apt install postfix
After confirming you want to install postfix, you’ll be greeted with this FreeBSD-looking dialog.
We’re going to choose “no configuration,” and then set it up manually, because none of the options
match our use case for mail on our Ubuntu VM. We want mail to be sent via SMTP, but not to be
received, and we want to exercise our DNS and network configuration to get mail to its destination.
In the middle of the output that follows, you should see something like this …
Preparing to unpack …/postfix_3.4.13-0ubuntu1_amd64.deb …
Unpacking postfix (3.4.13-0ubuntu1) …
Setting up ssl-cert (1.0.39) …
Setting up postfix (3.4.13-0ubuntu1) …
Adding group `postfix’ (GID 119) …
Done.
Adding system user `postfix’ (UID 116) …
Adding new user `postfix’ (UID 116) with group `postfix’ …
Not creating home directory `/var/spool/postfix’.
Creating /etc/postfix/dynamicmaps.cf
Adding group `postdrop’ (GID 120) …
Done.
/etc/aliases does not exist, creating it.
Postfix (main.cf) was not set up. Start with
cp /usr/share/postfix/main.cf.debian /etc/postfix/main.cf
. If you need to make changes, edit /etc/postfix/main.cf (and others) as
needed. To view Postfix configuration values, see postconf(1).
After modifying main.cf, be sure to run ‘systemctl reload postfix’.
Created symlink /etc/systemd/system/multi-user.target.wants/postfix.service →
/lib/systemd/system/postfix.service.
Processing triggers for ufw (0.36-6) …
Processing triggers for systemd (245.4-4ubuntu3.11) …
The package installation script, as you see, makes a new, separate user, called postfix, to run the
postfix mail server as.
As it tells us, let’s create the postfix configuration file …
$ sudo cp -p /etc/postfix/main.cf.proto /etc/postfix/main.cf
… we’re not using the main.cf sample under /usr/share/postfix because it’s really thin (take a look
at it). As with our Rocky VM, let’s edit the configuration file …
$ sudo vi /etc/postfix/main.cf
… and set the following values under the areas for each variable’s sample. As you’ll note going through
main.cf it helpfully provides its own documentation. In files like this, it just saves you time if you put
the value right there next to the explanation for each configuration item. Your fingers and eyeballs
need move nowhere to know what you’re doing.
myhostname = ubuntu.cs470.local
mydomain = cs470.local
myorigin = $myhostname
sendmail_path = /usr/sbin/sendmail
newaliases_path = /usr/bin/newaliases
mailq_path = /usr/bin/mailq
setgid_group = postdrop
manpage_directory = /usr/share/man
sample_directory = /usr/share/postfix
inet_protocols = ipv4
… this last one was already set in the file I got from the package, the one I recommended you copy
earlier in this step. A lot of these settings are things I felt the postfix package should have filled out
for us, since they just aim at other pieces of postfix … but this is probably explained by our
choice of “no configuration” at that menu of choices apt threw us. The options sample_directory,
readme_directory, and html_directory should be commented out (add a # at the start of these lines to
turn them into comments). Once you’re done, save out the file, and tell postfix to start up.
$ sudo postfix start
You might get a warning about a symbolic link leaving /etc/postfix … you may safely ignore it. Now
we need to edit /etc/aliases. Add the following line to forward root’s mail …
root: peter@cs470.local
… and of course, replace my username with yours, and run newaliases to tell the mail subsystem to
re-process the aliases database.
$ sudo newaliases
Now let’s test our mail configuration. To do that, we need a command-line mailer, but …
$ which mail
… returns no output. So let’s install apt-file to search the apt database for specific commands …
$ sudo apt install apt-file
… have apt-file build its database …
$ sudo apt-file update
… and finally, search for mail or mailx …
$ apt-file search mail
Wow, this command returns a lot of output. Let’s have it refine its output a bit by piping it through
grep …
$ apt-file search mail | grep -w mail
… grep with the -w switch only matches output where mail appears as a whole word, not as part
of a larger word. Still not helpful. Maybe matching mail with a space after it? Maybe going straight
for mailx to reduce some noise here?
$ apt-file search mailx
bsd-mailx: /usr/bin/bsd-mailx
bsd-mailx: /usr/share/bsd-mailx/mail.help
bsd-mailx: /usr/share/bsd-mailx/mail.tildehelp
bsd-mailx: /usr/share/doc/bsd-mailx/NEWS.Debian.gz
bsd-mailx: /usr/share/doc/bsd-mailx/README.Debian.gz
bsd-mailx: /usr/share/doc/bsd-mailx/changelog.Debian.gz
bsd-mailx: /usr/share/doc/bsd-mailx/copyright
bsd-mailx: /usr/share/man/man1/bsd-mailx.1.gz
mailutils-mh: /usr/share/mailutils/mh/scan.mailx
manpages-pl: /usr/share/man/pl/man1/bsd-mailx.1.gz
manpages-pl: /usr/share/man/pl/man1/mailx.1.gz
manpages-posix: /usr/share/man/man1/mailx.1posix.gz
mmh: /etc/mmh/scan.mailx
mon: /usr/lib/mon/alert.d/mailxmpp.alert
nmh: /etc/nmh/scan.mailx
Bingo, much better. Looks like bsd-mailx is the droid we’re looking for.
$ sudo apt install bsd-mailx
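The -w behavior we leaned on above is easy to see with a fabricated list of package-ish names:

```shell
# -w matches "mail" only where it stands alone as a word; "mailx",
# "sendmail", and "bsd-mailx" all embed it next to other word characters.
printf '%s\n' mail mailx sendmail bsd-mailx | grep -w mail
```

Only the bare “mail” line matches; the hyphen in “bsd-mailx” isn’t a word character, but the “x” glued to “mail” is, so -w rejects it.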
16. Finally, we have something to test with …
$ echo 'test' | mail -s test root
… and checking the mail log …
$ sudo tail /var/log/mail.log
… I saw this …
Jul 25 01:04:56 ubuntu postfix/cleanup[34158]: 269AC61187: messageid=
Jul 25 01:04:56 ubuntu postfix/qmgr[32766]: 269AC61187:
from=, size=553, nrcpt=1 (queue active)
Jul 25 01:04:56 ubuntu postfix/local[34160]: 1F84E61186:
to=, orig_to=, relay=local, delay=0.04,
delays=0.02/0.02/0/0, dsn=2.0.0, status=sent (forwarded as 269AC61187)
Jul 25 01:04:56 ubuntu postfix/qmgr[32766]: 1F84E61186: removed
Jul 25 01:04:56 ubuntu postfix/smtp[34161]: fatal: unknown service: smtp/tcp
Jul 25 01:04:57 ubuntu postfix/qmgr[32766]: warning: private/smtp socket: malformed
response
Jul 25 01:04:57 ubuntu postfix/qmgr[32766]: warning: transport smtp failure — see a
previous warning/fatal/panic logfile record for the problem description
Jul 25 01:04:57 ubuntu postfix/master[32764]: warning: process
/usr/lib/postfix/sbin/smtp pid 34161 exit status 1
Jul 25 01:04:57 ubuntu postfix/master[32764]: warning: /usr/lib/postfix/sbin/smtp:
bad command startup — throttling
Jul 25 01:04:57 ubuntu postfix/error[34162]: 269AC61187: to=,
orig_to=, relay=none, delay=1, delays=0/1/0/0.01, dsn=4.3.0, status=deferred
(unknown mail transport error)
Note, the proverbial first error here is “fatal: unknown service.” Turns out postfix wants to reference
a service-to-port database, like we’ve seen in /etc/services, but is expecting to find it in its queue
directory. I found this out, of course, from a web search … but it was sufficiently painful to find that
I’m going to hand you this one … but walk you through what I had to go through to troubleshoot it,
because it showed use of some core Unix concepts, and a very important and common way of fixing
SMTP delivery issues with e-mail.
Also note: in the last line, postfix says the mail’s status is “deferred.” This means postfix has hung
onto the mail, considers this problem temporary, and will try to deliver it again once the problem goes
away.
So, let’s aim it at the services database with a hard link, so it tracks the data in the “official” copy of the
services database there, and to save you from discovering, like I did, that postfix is running in a
chroot environment like we played with in lab one …
$ sudo ln /etc/services /var/spool/postfix/etc/services
… note that we are using a hard link here because a soft link’s target path wouldn’t resolve from
inside the chroot. Now instruct postfix to run
through its queue again …
$ sudo postfix flush
… looking at /var/log/mail.log again …
Jul 25 01:07:25 ubuntu postfix/qmgr[32766]: 269AC61187:
from=, size=553, nrcpt=1 (queue active)
Jul 25 01:07:25 ubuntu postfix/smtp[34239]: 269AC61187: to=,
orig_to=, relay=none, delay=149, delays=149/0.02/0/0, dsn=4.4.3,
status=deferred (Host or domain name not found. Name service error for
name=cs470.local type=MX: Host not found, try again)
Could it be because the chroot postfix is running inside has no resolver?
$ ls -l /var/spool/postfix/etc/resolv.conf
lrwxrwxrwx 2 root root 39 Apr 23 07:33 /var/spool/postfix/etc/resolv.conf ->
../run/systemd/resolve/stub-resolv.conf
There’s a symbolic link there, but is it aiming at anything? We can check this out by using the -L option
with ls, which tells it to follow symlinks.
‼ IMPORTANT NOTE: if you don’t have the symbolic link above, don’t worry … just keep going. Either
way, we’re going to fix the root problem here.
$ ls -lL /var/spool/postfix/etc/resolv.conf
ls: cannot access ‘/var/spool/postfix/etc/resolv.conf’: No such file or directory
So the symbolic link is broken … it looks like it was trying to aim at /var/run/systemd/resolve/stub-resolv.conf, which exists … note that /var/run is a symlink to /run in Ubuntu, which is on a
ramdisk (see “tmpfs” if you run df), so a hard link is not possible here. Let’s copy that file instead …
$ sudo rm -f /var/spool/postfix/etc/resolv.conf
$ sudo cp /run/systemd/resolve/stub-resolv.conf /var/spool/postfix/etc/resolv.conf
Restarted postfix just in case, and told it to re-run its queue … much better now.
Jul 25 01:37:45 ubuntu postfix/smtp[35042]: 269AC61187: to=,
orig_to=, relay=freebsd.cs470.local[192.168.223.72]:25, delay=1969,
delays=1969/0.01/0.01/0.01, dsn=2.0.0, status=sent (250 2.0.0 16P1bjEi008998 Message
accepted for delivery)
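The hard-link-versus-symlink distinction that bit us in this step can be demonstrated in a scratch directory:

```shell
# A hard link is a second name for the same inode: no path lookup happens
# later, so it works anywhere (including inside a chroot). A symlink just
# stores a pathname, which may not resolve from where it's read.
tmp=$(mktemp -d)
echo data > "$tmp/original"
ln "$tmp/original" "$tmp/hard"        # hard link: same inode, link count 2
ln -s /nonexistent "$tmp/dangling"    # symlink: merely a stored pathname
cat "$tmp/hard"                       # prints: data
ls -l "$tmp/hard" | awk '{print $2}'  # prints the link count: 2
cat "$tmp/dangling" 2>/dev/null || echo 'dangling symlink: no such file'
rm -rf "$tmp"
```

The dangling symlink fails exactly the way postfix’s chrooted resolv.conf link did: the name is there, but the path it points at isn’t.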
On to the next thing.
17. Read but do not do this step.
!! READ BUT DO NOT DO THIS STEP, UNTIL THE START OF STEP 18. Step 17 here is only here for your
reading pleasure, to share past lessons learned in case you truly get stuck, to talk about SMTP smart
host, and to illustrate a basic sysadmin concept and vocabulary word: “kludge.”
In prior years, I was unable to get postfix to properly resolve the MX record to cleanly deliver
diagnostic e-mails between VMs, and we had to resort to a “kludge.”
Sometimes you need to send e-mail to the rest of the internet via another system, because network
policy simply won’t let you. There is no such network policy here … I just couldn’t get postfix to
behave. So sometimes, using an SMTP “smart host” isn’t a kludge, but in our case, it was.
Just in case you hadn’t been exposed to the word “kludge” before … it’s a sub-optimal solution. Not
the solution we want to roll out, but if it holds things together until the end of the business day on
Friday, we can fix it later, The Right Way™.
We’re going to set up our FreeBSD VM as a “smart host.” This means that our local host (in this case,
our Ubuntu VM) doesn’t know how to deliver mail, so it’s going to use a system smarter than it … in
this case, our FreeBSD VM, where we want the mail to go, anyways.
$ sudo vi /etc/postfix/main.cf
Under the section covering the option “relayhost,” add the line …
relayhost = [freebsd.cs470.local]
… and, after saving main.cf, tell postfix to reload its configuration …
$ sudo postfix reload
18. GnuPG. Fortunately, this is going to go way quicker than mail …
$ which gpg
/usr/bin/gpg
… because gpg is already installed. Wahoo, shortest step in a lab, EVAR.
19. Updates. Let’s set up root’s crontab to check for updates every night at midnight …
$ sudo crontab -e
… and add the following line …
0 0 * * * apt update && apt list --upgradable
… I’m hoping you can tell what this does, at this point.
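If you can’t quite tell yet, here’s a sketch of how that crontab line breaks into cron’s five time fields plus the command (with filename globbing disabled so the asterisks survive as literal words):

```shell
set -f   # turn off globbing so "*" stays a literal word during splitting
entry='0 0 * * * apt update && apt list --upgradable'
set -- $entry
# Fields: minute, hour, day-of-month, month, day-of-week ... then the command.
echo "min=$1 hour=$2 dom=$3 month=$4 dow=$5"
set +f
```

“0 0 * * *” means minute 0 of hour 0 – midnight – every day of every month; everything after the fifth field is the command cron runs.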
part 3: web server containerized with Docker
In order to spare you the mundane ease with which one can typically set up a web server today, we’re going
to take this opportunity to introduce both a layer of abstraction and a new technology to you: Docker.
The idea behind Docker is a simple but powerful one: virtual machines are great for a lot of things, but they
can be very wasteful. In setting up a virtualized server instance, we typically install a whole copy of Windows
or Linux each time, at a cost of anywhere from 2 to 10 GB of storage for each operating system instance, then
consume the memory (RAM) involved in running that additional operating system instance for each virtual
machine, and additional memory and CPU are consumed emulating virtual hardware.
Docker calls its virtualization units “containers,” and uses a very minimalist approach, bordering on extreme
para-virtualization. Containers share services and resources with a host operating system wherever possible,
reducing the footprint inside each container’s filesystem to only the bare libraries and files required by a
particular running service. Wherever it makes sense, files or data to be served by containers are mounted into
each container from the host computer’s filesystems.
The result is profound: though we lose a lot of the logical separation provided by full-blown virtual machines,
we get tremendous resource savings, because each of our service units is just the size of the service, with
a thinner layer of separation. If we need to add in common data files, we can “map” folders with common
service data into multiple containers if needed, and either ramp up a fleet to scale, or just provide a thin
logical layer of separation and abstraction between separate services in our server fleet.
Docker gives you a great, economical way to cloud-host a lean-as-possible service instance, and to separate
the services within that system from one another, by more than just filesystem access controls.
Let’s jump right in.
20. First, let’s install docker and its container authoring tool, docker-compose. (We’ll grab nginx, the
web server we’re setting up, as a container image in a moment.)
$ sudo apt install docker docker-compose
Note that as a part of the truncated output below, docker is being set up with a sandbox group
account, and is being registered with systemd as a service.
Setting up docker.io (20.10.2-0ubuntu1~20.04.2) …
Adding group `docker’ (GID 121) …
Done.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service →
/lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket →
/lib/systemd/system/docker.socket.
Setting up dnsmasq-base (2.80-1.1ubuntu1.4) …
Setting up libtiff5:amd64 (4.1.0+git191117-2ubuntu0.20.04.1) …
Setting up libfontconfig1:amd64 (2.13.1-2ubuntu3) …
Setting up ubuntu-fan (0.12.13) …
Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service →
/lib/systemd/system/ubuntu-fan.service.
Setting up docker-compose (1.25.0-1) …
docker itself can be used to grab stock containers from Docker Hub (https://hub.docker.com), and
from arbitrary repositories. Our target web server for this lab, nginx, is available as a container from
the default repository. Let’s grab it.
$ sudo docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
33847f680f63: Pull complete
dbb907d5159d: Pull complete
8a268f30c42a: Pull complete
b10cf527a02d: Pull complete
c90b090c213b: Pull complete
1f41b2f2bf94: Pull complete
Digest: sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
21. Let’s fire up the container. In the command below, -p 80:80 tells docker to map port 80 of our host
system (the Ubuntu VM) to port 80 inside the container. This allows the nginx service inside the
container to answer HTTP requests (TCP port 80) destined for our Ubuntu VM’s primary network
interface and IP address.
$ sudo docker run -p 80:80 -d nginx
If you (and I) did everything correctly to this point, the docker command will return a long hex
“container ID” for the container you just started up, and you should be able to go to the following URL
…
http://ubuntu.cs470.local/
… and see a webpage welcoming you to the nginx web server.
Note: you kinda, sorta have a VM inside a VM now. Pretty cool, huh?
22. In order to see all the containers you have running, the docker command has a complete syntax of
subcommands underneath it, and docker ps will, like the command ps, show you a list of running
things.
$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                NAMES
69ef9dece01a   nginx   “/docker-entrypoint.…”   7 seconds ago   Up 6 seconds   0.0.0.0:80->80/tcp   modest_ishizaka
Note that docker ps not only gives you the important handle for the container, in a shorter form of its
container ID, but also shows the ports mapped into the containers.
23. Let’s stop the container now, so that we can do some reconfiguration …
$ sudo docker stop 69ef9dece01a
… it will echo back the container ID, presumably because you can provide the docker command
multiple container IDs on a single line, if you’re doing the same operation (stop/start/etc.).
Also note that now, if you reload that page in your browser, it will time out. The web service is no
longer running, so nothing answers anymore.
24. Let’s mount a common data directory, inside the container. In lab three, we created /home/tmp with
wide-open permissions so that any user on any of our NFS clients (every VM except our OpenBSD
name server) could use the disk space on our file server. Now, let’s use nginx to share up the content
in that directory.
First I used the find command to find nginx.conf files in /var/lib/docker because that's where
docker stores downloaded images and their unpacked layers …
$ cd / && sudo find /var/lib/docker -iname "nginx.conf"
This showed me two files, both under /var/lib/docker/overlay2/*/*/etc/nginx/conf.d. You
might have more … and that's okay. Remember, the asterisks here are wildcards, standing in for the
long layer-ID directory names. Now let's replicate the whole nginx container configuration under
/usr/local/etc/nginx …
$ sudo cp -pr /var/lib/docker/overlay2/*/*/etc/nginx /usr/local/etc/nginx
… remember, replace my asterisks with the pieces of your actual pathname … next, I edited the
duplicate I just placed in /usr/local/etc/nginx/conf.d/default.conf and added the autoindex
option to the root location of the web server, so that folders are rendered as HTML pages with
pretty download links for our web browser …
Now we’re going to re-start our docker container with our /home/tmp in place of the webroot, and our
/usr/local/etc/nginx in place of the container’s default nginx configuration folder.
$ sudo docker run -p 80:80 -v /home/tmp:/usr/share/nginx/html:ro \
      -v /usr/local/etc/nginx:/etc/nginx:ro -d nginx
Note that the above is a single command, continued across two lines with a trailing backslash.
If you did everything correctly, you should now be able to reload the website in your browser, and see
the list of files in /home/tmp … listed in your browser. Try making files in /home/tmp and refreshing
your web browser.
We’ll be using this again shortly, but pretty cool … we’re using Docker on Ubuntu and nginx to serve
up files from our Rocky VM to web browser clients. We’re pulling it all together. Pat yourself on the
back.
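As an aside, the wildcard expansion that made the cp command above work can be seen in isolation with a throwaway directory tree (the names below are made up for illustration, nothing to do with docker's real layer IDs):

```shell
# Build a tiny tree that mimics the overlay2 nesting (hypothetical names):
mkdir -p /tmp/globdemo/layer1/diff/etc/nginx /tmp/globdemo/layer2/diff/etc/nginx

# Each * matches exactly one path component, so the shell expands this
# pattern to every matching path before ls ever runs:
ls -d /tmp/globdemo/*/*/etc/nginx
```

The shell, not ls, performs the expansion … which is why the same pattern carried the long overlay2 directory names into the cp command for us.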
25. Now let’s make sure our docker container starts when our Ubuntu VM starts. Use docker ps to get
the container ID, then run the following commands:
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo docker update --restart=always c482f6926002
This is not the best way, as I read it … we should likely be using docker service, but reboot to test it
… it worked for me. Extra credit for the first person to give me a proper docker service configuration
with all the right options for our container.
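For anyone chasing that extra credit, one common approach (an assumption on my part, not the official answer) is a systemd unit that runs the container in the foreground so systemd can supervise it. Everything below (the unit filename, the container name webroot-nginx, and the mount paths) is hypothetical; adjust it to match your setup:

```ini
# /etc/systemd/system/webroot-nginx.service (hypothetical filename)
[Unit]
Description=nginx container serving /home/tmp
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container of the same name, then run ours without -d
# so systemd supervises the docker client process directly:
ExecStartPre=-/usr/bin/docker rm -f webroot-nginx
ExecStart=/usr/bin/docker run --name webroot-nginx -p 80:80 \
    -v /home/tmp:/usr/share/nginx/html:ro \
    -v /usr/local/etc/nginx:/etc/nginx:ro nginx
ExecStop=/usr/bin/docker stop webroot-nginx
Restart=always

[Install]
WantedBy=multi-user.target
```

With that in place, sudo systemctl daemon-reload followed by sudo systemctl enable webroot-nginx would replace the --restart=always trick.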
26. Wouldn’t it be nice (https://www.youtube.com/watch?v=lD4sxxoJGkA) if there was a way to easily
manage all the containers you install for docker? We’re going to install an open-source container
manager called portainer that allows you to do just that.
First, let’s create a volume for portainer to separately store persistent state data …
$ sudo docker volume create portainer_data
… and then tell docker to run portainer. What?! It’s not been downloaded yet …
$ sudo docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data \
      portainer/portainer-ce
… and of course, docker has an answer for that. Realizing it doesn’t have portainer, it just pulls it.
27. In a web browser on your host operating system, go to http://ubuntu.cs470.local:9000/ and you
should be prompted to choose a username and password for the admin user. Note that we’re using
port 9000 for this web service because the standard HTTP port 80 is taken by nginx.
28. Select Docker as your container environment, hit connect, and you’re good to go …
… with an interface that looks like this.
The problem that held up this very lab from being published was my falling into this fringe case
(https://github.com/portainer/portainer/issues/2434), for which no reliable solution has been
proposed online … the person who reported it says later in the thread that it just … started
working at some point, and he didn't know why.
For those of you paying attention, it's the converse of my situation, but with the same outcome … had
this person figured out what the issue was, I might have too. So when you face a problem with your
network, don't remove the NIC, don't reinstall your system … figure out what you missed or what's not
working correctly. You learn more this way, and you tend to do less work in most cases. Work smarter,
not harder, as they say …
29. You can do a lot in this web UI. You can add images, create new containers, stop or resume containers,
look at container logs, and much more. This can be pretty useful when you have a lot of containers
running.
I wanted to do a little more here, but want to get this lab into your hands already, so more here later.
It’s worth noting that docker-compose can be used in its own right, to build custom containers with a
fine level of attention to detail. I was initially going to base this part of the exercise on this walkthrough …
https://medium.com/@lukasoppermann/setting-up-let-s-encrypt-nginx-on-ubuntu-16-04-with-docker-482808d4b0ea
… but again, just wanted to turn this lab over, and show you the door to docker, so to speak, if you
want to walk through it some more. Mission accomplished. Please also find a great once-over of
docker commands at the following URL:
https://tekraze.com/2020/05/common-docker-commands-you-must-know/
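Since docker-compose came up above, here's a hedged sketch of a docker-compose.yml that would reproduce our earlier docker run invocation; the service name web is my own choice, and the paths assume the setup from step 24:

```yaml
# docker-compose.yml (sketch; service name "web" is arbitrary)
version: "3"
services:
  web:
    image: nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - /home/tmp:/usr/share/nginx/html:ro
      - /usr/local/etc/nginx:/etc/nginx:ro
```

With that file in the current directory, sudo docker-compose up -d starts the same container, and sudo docker-compose down tears it back down.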
part four: tuning
30. Using the top command, I can see this Ubuntu VM is only using about 361 MB of RAM. I'm shutting it
down, and scaling its RAM back to 512 MB.
$ sudo poweroff
Then power it back on.