
Server deployment with Python: From A to Z.

Published Feb 28, 2021

In this tutorial I will illustrate how to configure a server to run a web application using no tool other than Python.

By the end of it, you will have:

  • Learned what the components of a server configuration for web deployment are.
  • Obtained a reproducible Python template, via GitHub gists.

Background

Back in the days before Docker existed, I used to configure servers in cloud environments via Python code. It was basically a Python script/project that I ran on my (local) machine, and it would execute commands on the (remote) server.

I haven't used this project of mine in a while, but I stumbled upon it this past week, and I realized two things:

  • Today, I wouldn't enjoy using it. Docker is much more comfortable.
  • However, understanding that code gives great insight into how to configure remote servers!

In fact, the reason I wrote it is that, even earlier, I used to configure all servers manually: log in via ssh, install the required packages, etc.

After having done it a bunch of times, I decided to automate the procedure. This is why I built the project that I am going to show you.

This means that the code highlights the details of everything you should do on a server, unlike with Docker, where most details are hidden.

Therefore, this post can be helpful for two reasons:

  1. If you don't want to, or cannot, use Docker.
  2. It will give you a pretty good understanding of what is happening in the remote machine! Knowledge always matters.

Tutorial Overview

The code is organized into functions, where each function executes a task on the server even though it runs on the local machine.

Therefore, before looking at the code, it's worth thinking about the steps you would need to perform (manually) if you hadn't read this tutorial.

In fact, a manual deployment on a vanilla server is a must-do for a developer. At least once in your lifetime, spin up a vanilla machine on any cloud provider (GCP, AWS, Linode, DigitalOcean, Azure, etc.) and do a full-fledged production deployment. At least once!!

You will learn a lot from doing it manually. Let's first see an overview of each individual step, and then look again at each of them together with the code.

Step 1 - Basic machine configuration

First, you should update all packages. When you create a new instance, it usually runs a slightly outdated Operating System version, so you have to update the packages. For instance, in Debian/Ubuntu this is done with apt-get update followed by apt-get upgrade.

After that, you will have to install all the packages that are specific to your project. In the code for this tutorial I will install many packages, including postgresql, which allows you to run a web application that uses PostgreSQL in the backend.

Then, because you want to run a Python application, you need to have Python configured on the server.

Most Operating Systems come with a default Python binary pre-installed. However, that's not a very good solution, in my opinion.

The reason is that you may have developed your application with Python 3.7, but the server has Python 3.9. Things will crash, and it will be hard to understand why.

That's why my code also has specific instructions to set up the correct Python version, in three sub-steps:

  • Download Python source code.
  • Compile it.
  • Link it as executable in the machine.

Finally, as with most Python applications, you'll want to create a virtualenv just for this application.

If your app has a dedicated server, meaning that it's the only software running on it, then you may skip this part. My code nonetheless contains a function to install virtualenvwrapper and set up a virtualenv dedicated to this app.

That's the end of the first step.

Step 2 - Install your application

For the server to run your application, the application must be installed! If your app is an executable that can be installed, then you should download it (via curl or wget) or copy it from the local machine to the remote server (via scp).

A more common case with Python applications, at least for me, is that the app needs its source code in order to run. In this case, you need to put the source code on the remote server. There are two common ways to do it:

  1. Using git clone, so that you take the source code from a git repository that's online.
  2. Using scp, so that you copy the source code from your local machine to the remote one.

In my code I will use option 1, so I can show you a nice trick I found back then for writing code that switches between branches and commits in the repository (for example, if you want to do a rollback).
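That said, option 2 is also easy with fabric, because its Connection object exposes a put() method that copies a local file to the remote host over SFTP. Here is a hypothetical sketch (the helper name, archive name, and paths are my assumptions, not part of the original project):

def _copy_source(conn):
    # Hypothetical helper for option 2: upload a tarball of the
    # local source tree, then unpack it on the server.
    conn.put('app.tar.gz', '/tmp/app.tar.gz')  # SFTP upload
    conn.run('mkdir -p ~/app && tar xzf /tmp/app.tar.gz -C ~/app')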

After that, if you have a requirements.txt file, like most Python projects do, then you will have to install all those Python modules within the virtualenv. I will show how to do it in a minute.

Step 3 - Run the app!

This part is the easiest, especially if you use a production-ready, robust web server. I chose Gunicorn, but there are many alternatives that are equally easy to use.

These are the three steps that you have to go through, at a high level. Now I want to look at them more carefully with you, but... there's one missing piece: how can we send commands to the server without manually logging in via ssh?

Connection set-up

Luckily for us, a Python package called fabric solves this problem. All you need is a function like the following, which creates a connection to the server:

from os import environ

from fabric import Connection

def create_conn():
    # Switch the two lines if you connect via PEM Key
    # instead of password.
    params = {
        #'key_filename': environ['SSH_KEY_PATH']}
        'password': environ['REMOTE_PASSWORD']
    }
    conn = Connection(
        host=environ['REMOTE_HOST'],
        user=environ['REMOTE_USER'],
        connect_kwargs=params,
    )
    return conn

The connection object returned by this function will be used throughout the rest of the code.
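Once you have this function, every remote operation goes through the object it returns. For example, a quick smoke test (assuming the same environment variables are already set) might look like this:

conn = create_conn()
res = conn.run('uname -a')      # executes on the remote host
print(res.stdout.strip())
conn.sudo('apt-get -y update')  # same idea, but with sudo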

Part 1 in depth

In part 1 we want to configure the server at the most basic level: OS libraries, programming languages, etc.

If we were to do it manually, we would ssh in and then run a bunch of apt-get install ... commands from the shell.

Well, we can do the same in Python code, thanks to the connection object. Here's the code.

def _create_vm(conn):
    _install_packages(conn)
    _install_python(conn)
    _install_venv(conn)


def _install_packages(conn):
    conn.sudo('apt-get -y update')
    conn.sudo('apt-get -y upgrade')
    conn.sudo('apt-get install -y build-essential')
    #conn.sudo('apt-get install -y checkinstall')
    conn.sudo('apt-get install -y libreadline-gplv2-dev')
    conn.sudo('apt-get install -y libncurses-dev')
    conn.sudo('apt-get install -y libncursesw5-dev')
    conn.sudo('apt-get install -y libssl-dev')
    conn.sudo('apt-get install -y libsqlite3-dev')
    conn.sudo('apt-get install -y tk-dev')
    conn.sudo('apt-get install -y libgdbm-dev')
    conn.sudo('apt-get install -y libpq-dev')
    conn.sudo('apt-get install -y libc6-dev')
    conn.sudo('apt-get install -y libbz2-dev')
    conn.sudo('apt-get install -y zlib1g-dev')
    conn.sudo('apt-get install -y openssl')
    conn.sudo('apt-get install -y libffi-dev')
    conn.sudo('apt-get install -y python3-dev')
    conn.sudo('apt-get install -y python3-setuptools')
    conn.sudo('apt-get install -y uuid-dev')
    conn.sudo('apt-get install -y lzma-dev')
    conn.sudo('apt-get install -y wget')
    conn.sudo('apt-get install -y git')
    conn.sudo('apt-get install -y postgresql')


def _install_python(conn):
    """Install python 3.7 in the remote machine."""

    res = conn.run('python3 --version')
    if '3.7' in res.stdout.strip():
        # Python 3.7 is already installed.
        return

    conn.run('rm -rf /tmp/Python3.7 && mkdir /tmp/Python3.7')

    with conn.cd('/tmp/Python3.7'):
        conn.run('wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tar.xz')
        conn.run('tar xvf Python-3.7.0.tar.xz')

    with conn.cd('/tmp/Python3.7/Python-3.7.0'):
        conn.run('./configure --enable-optimizations')
        conn.run('make')

    # see https://github.com/pyinvoke/invoke/issues/459
    conn.sudo('bash -c "cd /tmp/Python3.7/Python-3.7.0 && make altinstall"')


def _install_venv(conn):
    """Install virtualenv, virtualenvwrapper."""

    res = conn.run('which python3.7')
    res = res.stdout.strip()
    py_path = res

    conn.sudo('apt install -y virtualenvwrapper')

    # for a standard Debian distro
    venv_sh = '/usr/share/virtualenvwrapper/virtualenvwrapper.sh'

    conn.run('echo >> ~/.bashrc')  # new line
    conn.run(f'echo source {venv_sh} >> ~/.bashrc')
    conn.run('echo >> ~/.bashrc')  # new line
    conn.run('echo export LC_ALL=en_US.UTF-8 >> ~/.bashrc')
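    # Note: each conn.run() starts a fresh shell, so sourcing ~/.bashrc
    # here has no lasting effect on later commands; the conn.prefix()
    # below is what makes mkvirtualenv available.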
    conn.run('source ~/.bashrc')
    env = environ['VENV_NAME']
    with conn.prefix(f'source {venv_sh}'):
        conn.run(f'mkvirtualenv -p {py_path} {env}')

The code above implements the three sub-steps I discussed before:

  1. Install all the libraries that we need at the OS level (in this case git and postgresql are in there too, among many others).
  2. Install the specific Python version that we want, compiling it from source code.
  3. Install virtualenvwrapper. In fact, you could use any virtualenv management software, or none at all if the machine is dedicated to a single Python app.

If you look carefully at the code, you will understand the programming pattern in one second: every task is accomplished by creating a connection object and using its .run() (or .sudo()) method, passing as argument the same command you'd run manually.

That's one more reason why I said it's very important to do a manual deployment at least once!

Part 2 in depth

In part 2 I said we want to do two things:

  1. Get the source code of the app in the machine.
  2. Install all Python requirements.

Let's look at the first one. In the next function, I simply use a few git commands to make sure the code is pulled from a repository onto the machine.

The function may look a bit complex at first sight, but it's not. I wrote a general function that allows you to check out a specific branch, by name, or a specific commit, by its hash, and that's why the function looks complicated.

But you could simplify it and always do something like git pull origin master, and it would still work!
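For contrast, here is a minimal sketch of that simplified variant (assuming the repository has already been cloned into the GIT_DIR directory); the full function follows right after:

def _pull_repo_simple(conn):
    # Stripped-down variant: always pull the default branch, no rollbacks.
    source = environ['GIT_DIR']
    conn.run(f'cd {source} && git pull origin master')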

def _pull_repo(conn, branch=None, commit=None):
    if branch and commit:
        raise ValueError('Cannot provide both branch name and commit hash')
    source = environ['GIT_DIR']
    if not branch:
        branch = environ['GIT_DEFAULT_BRANCH']
    repo = environ['REPO_URL']
    if commit:
        print('Hash provided. Resetting to that commit.')
        conn.run(
            f"cd {source} && "
            'git stash && '
            f'git reset --hard {commit} && '
            'git checkout -B tmp_branch'
        )
    else:
        if conn.run(f'test -e {source}/.git', warn=True).ok:
            print('Repo already exists.')
        else:
            print('Repo did not exist. Creating it...')
            conn.run(f'git clone {repo} {source}')
            conn.run(f'cd {source} && git remote set-url origin {repo}')
        print('Checking out the requested branch...')
        conn.run(f'cd {source} && git fetch origin && git checkout {branch} && git pull origin {branch}')
    current_hash = conn.run(f'cd {source} && git log -n 1 --format=%H', hide='both')
    current_hash = current_hash.stdout.strip()
    print(f'Checked out {current_hash}')
    return current_hash

You probably noticed that some variables are loaded from environ. I will get to that in a minute, but basically the reason is that you may want to keep the repository name and the user credentials secret, so it's better not to have them directly in the code. Bear with me for a minute and this part will become clear.

The second part, installing the Python requirements, is much easier. The only trick here is that I use conn.cd() and conn.prefix() to activate the virtualenv before installing the requirements. Other than that, the main command is exactly what you would run manually: pip install -r requirements.txt

def _install_project(conn):
    repo_path = environ['GIT_DIR']
    venv_name = environ['VENV_NAME']
    venv_sh = 'source /usr/share/virtualenvwrapper/virtualenvwrapper.sh'
    with conn.cd(repo_path):
        with conn.prefix(
            f'{venv_sh} && workon {venv_name}'
        ):
            conn.run('pip install --upgrade pip')
            conn.run('pip install -r requirements.txt')
            # If your project has a `setup.py` then
            # install the project.
            #conn.run('pip install -e .')

Part 3 in depth

Part 3 is the easiest, because I use Gunicorn as the production web server, and running it takes just one simple line: gunicorn <app_module_path>.

def _restart_web(conn):
    try:
        conn.sudo('pkill gunicorn')
    except Exception:
        pass  # gunicorn may not be running at all.
    repo_path = environ['GIT_DIR']
    venv_name = environ['VENV_NAME']
    venv_sh = 'source /usr/share/virtualenvwrapper/virtualenvwrapper.sh'
    with conn.cd(repo_path):
        with conn.prefix(
            f'{venv_sh} && workon {venv_name}'
        ):
            conn.run("gunicorn app:app -b 0.0.0.0:8080 -w 3 --daemon")

OK, OK... I do a few more things in the code:

  1. First, I stop the gunicorn process if it's already running. This will cause a bit of downtime for the app.
  2. Then I pass a few configuration arguments to the new gunicorn process to make sure it runs correctly: -b binds it to the port I want (8080 in this example); -w specifies the number of workers (processes); --daemon runs it in the background, so that you don't have to keep the connection open.
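As an aside, that bit of downtime is avoidable: Gunicorn's master process reloads its workers gracefully when it receives a SIGHUP signal. A hypothetical sketch (the pid-file path is my assumption, and it requires starting Gunicorn with the --pid flag):

def _reload_web(conn):
    # Hypothetical zero-downtime variant, assuming Gunicorn was started
    # with `--pid /tmp/gunicorn.pid`: ask the master to reload its workers.
    conn.run('kill -HUP $(cat /tmp/gunicorn.pid)', warn=True)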

And that's it! The last thing we have to look at is how to load the environment variables.

There are a ton of ways to do it. In this project I decided to create a file secret.py that defines the variables using os.environ[..] = .., and then to do import secret from the main file.

For the proof of concept, I looked for a ready-to-use Flask app. I wanted to use an app that was NOT developed by me, to show you how flexible this code is.

I found a sample app published on GitHub by DigitalOcean. I have no affiliation with them, but it looked like a good fit for this project, which is why you see it in the secret.py file. Here it is:

# File secret.py

from os import environ, path

### Connection
environ['REMOTE_HOST'] = '172.104.239.248'
environ['REMOTE_USER'] = '****'
environ['REMOTE_PASSWORD'] = '********'
#
## Python venv
environ['VENV_NAME'] = 'prod-api'
#
### Git
environ['GIT_DIR'] = '~/app'
environ['GIT_DEFAULT_BRANCH'] = 'main'
environ['REPO_URL'] = 'https://github.com/digitalocean/sample-flask.git'

In the main file, which you can find in this gist, I simply do import secret and all the environment variables are loaded. The two files (the main script and secret.py) must be in the same directory.
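As an aside, a common alternative to the secret.py pattern is a plain .env file loaded with the python-dotenv package (not used in this project). A minimal sketch, assuming pip install python-dotenv and a .env file that defines the same variable names:

# Hypothetical alternative to secret.py.
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a local .env file into os.environ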

One last trick!

The code dates from a few years back, but there's one last trick that I implemented back then and that I want to share with you.

The motivation is that sometimes I want to run just one of the functions we saw. And sometimes it may need input arguments (as is the case for _pull_repo).

This means that I want a __main__ entry point that runs only the function I choose in that specific moment, and also passes to it any arguments coming from the command line.

Here's what I came up with two years ago.

def main(tasks):
    if len(tasks) <= 1:
        print('No task name found')
        return
    i = 1
    while i < len(tasks):
        try:
            fn = getattr(sys.modules[__name__], tasks[i])
        except AttributeError:
            print(f'Cannot find task {tasks[i]}. Quit.')
            return
        params = {}
        j = i + 1
        while j < len(tasks) and '=' in tasks[j]:
            k, v = tasks[j].split('=')
            params[k] = v
            j += 1
        i = j
        print(f'Function is {fn}')
        print(f'args are {params}')
        fn(**params)


if __name__ == '__main__':
    '''
    Run it with
    $ python azPyDepl.py <task1> <key1-task1>=<value1-task1> <key2-task1>=<value2-task1> <task2> <key1-task2>=<value1-task2>
    E.g.
    $ python azPyDepl.py create_vm

    Or
    $ python azPyDepl.py pull_repo branch=develop
    '''
    import sys
    tasks = sys.argv
    main(tasks)
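For example, python azPyDepl.py pull_repo branch=develop looks up the function pull_repo in the module and calls it as pull_repo(branch='develop'). Note that every value arrives as a string, so a task that needs other types has to convert them itself.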

Let's test it!

It's time to test. I want to tell you beforehand that the result is good: I was very happy that my code ran perfectly even though I hadn't used it in a while!

Here's what I did.

First, I created a small machine on Linode. I have no affiliation with them; I just wanted to use a different provider, since the app code is by DigitalOcean.

The smallest machine on Linode costs $5 per month (other providers have similar prices), and the entire test lasted less than 5 minutes, so I spent just a few cents. I chose Debian 10 as the OS.

Then I copied the host, username, and root password from the Linode console into the secret.py file.

Then I downloaded the GitHub gist, keeping the same file name azPyDepl.py, into the same directory where secret.py is.

And finally, I ran four simple commands.

python azPyDepl.py create_vm
python azPyDepl.py pull_repo
python azPyDepl.py install_project
python azPyDepl.py restart_web

The first one took a few minutes (remember that it compiles and installs Python), and when everything was done the web app was running on the server!!

[Screenshot: the sample Flask app responding at the server's IP address in the browser.]

That's great! A Flask app deployed purely via Python. I have to say that a big thank-you goes to the Fabric developers.

I then took down the server (so the IP you see in the figure doesn't exist anymore), to avoid spending more $$.

Final comments and code

I know that this is not how you would do a production deployment today. I myself use Docker all the time, along with more advanced infrastructure systems provided by AWS.

If for some reason you cannot use more automated systems, then the project in this tutorial can be helpful.

But overall, the main benefit of this code for me is the value of the learning process. Understanding servers and how things work behind the scenes has been very important in my professional career, and I hope this post sheds some light and helps you too!

Here's the full code, in case the gist doesn't work.

Let me know if you ran into any problem while reproducing this tutorial.

# This script needs
# $ pip install fabric

from os import environ

from fabric import Connection

# Create a file `secret.py` in the same directory as this one
# and add in it the credentials to connect to the server and GitHub.
# Here is a template.
#############
# File secret.py

#from os import environ, path
#
### Connection
#environ['REMOTE_HOST'] = '172.104.239.248'
#environ['REMOTE_USER'] = '****'
#environ['REMOTE_PASSWORD'] = '********'
#
## Python venv
#environ['VENV_NAME'] = 'prod-api'
#
### Git
#environ['GIT_DIR'] = '~/app'
#environ['GIT_DEFAULT_BRANCH'] = 'main'
#environ['REPO_URL'] = 'https://github.com/digitalocean/sample-flask.git'
#############
import secret


def create_conn():
    # Switch the two lines if you connect via PEM Key
    # instead of password.
    params = {
        #'key_filename': environ['SSH_KEY_PATH']}
        'password': environ['REMOTE_PASSWORD']
    }
    conn = Connection(
        host=environ['REMOTE_HOST'],
        user=environ['REMOTE_USER'],
        connect_kwargs=params,
    )
    return conn


######################
# Internal Functions #
######################


def _create_vm(conn):
    _install_packages(conn)
    _install_python(conn)
    _install_venv(conn)


def _install_packages(conn):
    conn.sudo('apt-get -y update')
    conn.sudo('apt-get -y upgrade')
    conn.sudo('apt-get install -y build-essential')
    #conn.sudo('apt-get install -y checkinstall')
    conn.sudo('apt-get install -y libreadline-gplv2-dev')
    conn.sudo('apt-get install -y libncurses-dev')
    conn.sudo('apt-get install -y libncursesw5-dev')
    conn.sudo('apt-get install -y libssl-dev')
    conn.sudo('apt-get install -y libsqlite3-dev')
    conn.sudo('apt-get install -y tk-dev')
    conn.sudo('apt-get install -y libgdbm-dev')
    conn.sudo('apt-get install -y libpq-dev')
    conn.sudo('apt-get install -y libc6-dev')
    conn.sudo('apt-get install -y libbz2-dev')
    conn.sudo('apt-get install -y zlib1g-dev')
    conn.sudo('apt-get install -y openssl')
    conn.sudo('apt-get install -y libffi-dev')
    conn.sudo('apt-get install -y python3-dev')
    conn.sudo('apt-get install -y python3-setuptools')
    conn.sudo('apt-get install -y uuid-dev')
    conn.sudo('apt-get install -y lzma-dev')
    conn.sudo('apt-get install -y wget')
    conn.sudo('apt-get install -y git')
    conn.sudo('apt-get install -y postgresql')


def _install_python(conn):
    """Install python 3.7 in the remote machine."""

    res = conn.run('python3 --version')
    if '3.7' in res.stdout.strip():
        # Python 3.7 is already installed.
        return

    conn.run('rm -rf /tmp/Python3.7 && mkdir /tmp/Python3.7')

    with conn.cd('/tmp/Python3.7'):
        conn.run('wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tar.xz')
        conn.run('tar xvf Python-3.7.0.tar.xz')

    with conn.cd('/tmp/Python3.7/Python-3.7.0'):
        conn.run('./configure --enable-optimizations')
        conn.run('make')

    # see https://github.com/pyinvoke/invoke/issues/459
    conn.sudo('bash -c "cd /tmp/Python3.7/Python-3.7.0 && make altinstall"')


def _install_venv(conn):
    """Install virtualenv, virtualenvwrapper."""

    res = conn.run('which python3.7')
    res = res.stdout.strip()
    py_path = res

    conn.sudo('apt install -y virtualenvwrapper')

    # for a standard Debian distro
    venv_sh = '/usr/share/virtualenvwrapper/virtualenvwrapper.sh'

    conn.run('echo >> ~/.bashrc')  # new line
    conn.run(f'echo source {venv_sh} >> ~/.bashrc')
    conn.run('echo >> ~/.bashrc')  # new line
    conn.run('echo export LC_ALL=en_US.UTF-8 >> ~/.bashrc')
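    # Note: each conn.run() starts a fresh shell, so sourcing ~/.bashrc
    # here has no lasting effect on later commands; the conn.prefix()
    # below is what makes mkvirtualenv available.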
    conn.run('source ~/.bashrc')
    env = environ['VENV_NAME']
    with conn.prefix(f'source {venv_sh}'):
        conn.run(f'mkvirtualenv -p {py_path} {env}')


def _pull_repo(conn, branch=None, commit=None):
    if branch and commit:
        raise ValueError('Cannot provide both branch name and commit hash')
    source = environ['GIT_DIR']
    if not branch:
        branch = environ['GIT_DEFAULT_BRANCH']
    repo = environ['REPO_URL']
    if commit:
        print('Hash provided. Resetting to that commit.')
        conn.run(
            f"cd {source} && "
            'git stash && '
            f'git reset --hard {commit} && '
            'git checkout -B tmp_branch'
        )
    else:
        if conn.run(f'test -e {source}/.git', warn=True).ok:
            print('Repo already exists.')
            # run("cd %s && git pull upstream %s" % (source_dir, branch))
            #conn.run(f'cd {source} && git fetch origin {branch}')
            #conn.run(f'cd {source} && git reset --hard origin/{branch}')
        else:
            print('Repo did not exist. Creating it...')
            conn.run(f'git clone {repo} {source}')
            conn.run(f'cd {source} && git remote set-url origin {repo}')
        print('Checking out the requested branch...')
        conn.run(f'cd {source} && git fetch origin && git checkout {branch} && git pull origin {branch}')
    current_hash = conn.run(f'cd {source} && git log -n 1 --format=%H', hide='both')
    current_hash = current_hash.stdout.strip()
    print(f'Checked out {current_hash}')
    return current_hash


def _install_project(conn):
    repo_path = environ['GIT_DIR']
    venv_name = environ['VENV_NAME']
    venv_sh = 'source /usr/share/virtualenvwrapper/virtualenvwrapper.sh'
    with conn.cd(repo_path):
        with conn.prefix(
            f'{venv_sh} && workon {venv_name}'
        ):
            conn.run('pip install --upgrade pip')
            conn.run('pip install -r requirements.txt')
            # If your project has a `setup.py` then
            # install the project.
            #conn.run('pip install -e .')


def _restart_web(conn):
    try:
        conn.sudo('pkill gunicorn')
    except Exception:
        pass  # gunicorn may not be running at all.
    repo_path = environ['GIT_DIR']
    venv_name = environ['VENV_NAME']
    venv_sh = 'source /usr/share/virtualenvwrapper/virtualenvwrapper.sh'
    with conn.cd(repo_path):
        with conn.prefix(
            f'{venv_sh} && workon {venv_name}'
        ):
            conn.run("gunicorn app:app -b 0.0.0.0:8080 -w 3 --daemon")


#####################################
# Functions used from the __main__ ##
#####################################


def create_vm(**kwargs):
    _create_vm(create_conn())


def pull_repo(**kwargs):
    conn = create_conn()
    _pull_repo(conn, **kwargs)


def install_project(**kwargs):
    _install_project(create_conn())


def restart_web(**kwargs):
    _restart_web(create_conn())


def main(tasks):
    if len(tasks) <= 1:
        print('No task name found')
        return
    i = 1
    while i < len(tasks):
        try:
            fn = getattr(sys.modules[__name__], tasks[i])
        except AttributeError:
            print(f'Cannot find task {tasks[i]}. Quit.')
            return
        params = {}
        j = i + 1
        while j < len(tasks) and '=' in tasks[j]:
            k, v = tasks[j].split('=')
            params[k] = v
            j += 1
        i = j
        print(f'Function is {fn}')
        print(f'args are {params}')
        fn(**params)


if __name__ == '__main__':
    '''
    Run it with
    $ python azPyDepl.py <task1> <key1-task1>=<value1-task1> <key2-task1>=<value2-task1> <task2> <key1-task2>=<value1-task2>
    E.g.
    $ python azPyDepl.py create_vm
    '''
    import sys
    tasks = sys.argv
    main(tasks)