Postgres schema-level migration using Alembic for Python

PostgreSQL has long been a popular database choice among Python developers, and one of the main reasons is its support for schemas. A schema groups a collection of tables within a database under one name.

Schemas are analogous to directories at the operating system level, except that schemas cannot be nested.

Some advantages of using PostgreSQL schemas are:

  1. To allow many users to use one database without interfering with each other.
  2. To organize database objects into logical groups to make them more manageable.
  3. To control access: users can only make use of the tables in a schema once the owner grants them privileges.
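
For example, two schemas can each hold a table with the same name without clashing. Below is a minimal sketch using SQLAlchemy; the schema and table names are only illustrative:

from sqlalchemy import MetaData, Table, Column, Integer, String

# Two schemas can each contain a table with the same name without conflict
sales_metadata = MetaData(schema='sales')
hr_metadata = MetaData(schema='hr')

# 'report' exists independently as sales.report and hr.report
Table('report', sales_metadata, Column('id', Integer, primary_key=True))
Table('report', hr_metadata,
      Column('id', Integer, primary_key=True),
      Column('owner', String(50)))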

Most Python web frameworks use SQLAlchemy as the ORM and Alembic for database migrations, since Alembic uses SQLAlchemy itself to compare the current version of the database with the Python classes chosen for the ORM.

The main advantage of Alembic is that it auto-generates the migration script by comparing the Python ORM classes with the current version of the database.

When there is more than one schema in the database, it is always a good idea to have a separate migration setup for each schema, since Alembic's auto-generation does not confine itself to a single schema on its own.

Consider a scenario where you have two schemas: myschema, containing the table User, and schema1, containing the tables Contact and Address.

Now, if we try to add another table named Marks to myschema without a separate Alembic setup per schema, the auto-generated script will create the tables User and Marks and drop the tables Address and Contact.
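
For illustration, the auto-generated revision in that case could look roughly like this. This is only a hedged sketch; the revision identifiers and column definitions are illustrative:

from alembic import op
import sqlalchemy as sa

# Sketch of an unwanted auto-generated migration when schemas are not separated
def upgrade():
    op.create_table(
        'marks',
        sa.Column('id', sa.Integer(), primary_key=True),
        schema='myschema'
    )
    # Tables from the other schema are wrongly seen as removed and get dropped
    op.drop_table('contact', schema='schema1')
    op.drop_table('address', schema='schema1')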

To avoid this scenario, we must set up a separate migration per schema that compares only the tables of the desired schema with the current version of the database.

While setting up SQLAlchemy, you can provide the schema as follows:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import MetaData
import sqlalchemy


# All tables declared on this Base live in the 'myschema' schema
metadata = MetaData(schema='myschema')

Base = declarative_base(metadata=metadata, name='MySchemaBase')

# Create the schema itself before any of its tables are created
sqlalchemy.event.listen(
    Base.metadata,
    'before_create',
    sqlalchemy.DDL("CREATE SCHEMA IF NOT EXISTS {schema}".format(schema=metadata.schema))
)


class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    email = Column(String(50), nullable=False)
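
For instance, the Marks table from the scenario above could be added to the same Base. This is only a sketch; the column names are illustrative:

class Marks(Base):
    __tablename__ = 'marks'

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)
    score = Column(Integer, nullable=False)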

To set up the Alembic migration for the above schema:

1. First, create the Alembic environment using the command alembic init <name>:

alembic init myschema_alembic

This will create the directory structure shown below (by default, alembic init creates the config file as alembic.ini; rename it to myschema_alembic.ini so that each schema keeps its own configuration, and pass it to Alembic commands with the -c option):

yourproject/
    myschema_alembic/
        env.py
        README
        script.py.mako
        versions/  
    myschema_alembic.ini

2. Modify the database connection URL in myschema_alembic.ini:

sqlalchemy.url = postgresql://<username>:<password>@localhost/<db-name>

3. Modify the env.py file:

  • Running an auto-generate migration calls the run_migrations_online() function in env.py.

4. Set the target_metadata variable in env.py to the metadata created above:

target_metadata = <your-project-structure>.Base.metadata
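
For example, if the Base created earlier lives in a module such as myproject.models (a hypothetical path; use your own project structure), the relevant lines in env.py could be:

from myproject.models import Base  # hypothetical module path

target_metadata = Base.metadata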

5. Modify the run_migrations_online() function:

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # The version table name must be unique across all the schemas
            version_table='myschema_alembic_version',
            version_table_schema=target_metadata.schema,
            include_schemas=True,
            include_object=include_object
        )

        with context.begin_transaction():
            context.run_migrations()

6. Create the include_object function:

This function checks whether a database table belongs to the desired schema; if it does not, the table is skipped during auto-generation.

def include_object(object, name, type_, reflected, compare_to):
    # Skip tables that live outside the target schema during auto-generation
    if type_ == 'table' and object.schema != target_metadata.schema:
        return False

    return True

Note: Place the include_object function above `run_migrations_online()` in env.py.

That’s it! We are now ready to auto-generate migration scripts for the particular schema.
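
Assuming the config file is named myschema_alembic.ini as above, the migration for this schema can be generated and applied with:

alembic -c myschema_alembic.ini revision --autogenerate -m "add marks table"
alembic -c myschema_alembic.ini upgrade head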

Hope you found it useful. Thanks for reading.  

