We are using SQLAlchemy's autoload feature to do the column mapping so that we don't have to hardcode column definitions in our code.
class users(Base):
    __tablename__ = 'users'
    __table_args__ = {'autoload': True, 'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}
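(For context, autoload reflects over a bound engine at class-definition time, so the class above assumes setup along these lines; this is a minimal sketch and the MySQL URL is a placeholder, not our real connection string:)

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

# placeholder URL; autoload needs a live, bound engine when the class is defined
engine = create_engine('mysql://user:passwd@host/dbname')
Base = declarative_base()
Base.metadata.bind = engine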
Is there a way to serialize or cache the autoloaded metadata/ORM so we don't have to go through the autoload process every time we need to reference our ORM classes from other scripts/functions?
I have looked at Beaker caching and pickle but haven't found a clear answer on whether this is possible or how to do it.
Ideally we would run the autoload mapping script only when we have committed changes to our database structure, and reference a non-autoloaded/persistent/cached version of our database mapping from all other scripts/functions.
Any ideas?
What I am doing now is pickling the metadata after running the reflection through a database connection (MySQL), and, once a pickle is available, using that pickled metadata to reflect the schema with the metadata bound to an SQLite engine.
import pickle

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.ext.declarative import declarative_base

cachefile = 'orm.p'
dbfile = 'database'
engine_dev = create_engine('mysql://...', echo=True)  # db connect (connection string omitted)
engine_meta = create_engine('sqlite:///%s' % dbfile, echo=True)
Base = declarative_base()
Base.metadata.bind = engine_dev
metadata = MetaData(bind=engine_dev)

# load from pickle
try:
    with open(cachefile, 'rb') as cache:
        metadata2 = pickle.load(cache)
    metadata2.bind = engine_meta

    class Users(Base):
        __table__ = Table('users', metadata2, autoload=True)

    print "ORM loaded from pickle"

# if no pickle yet, reflect through the database connection
except IOError:
    class Users(Base):
        __table__ = Table('users', metadata, autoload=True)

    print "ORM through database autoload"

    # create metapickle
    metadata.create_all()
    with open(cachefile, 'wb') as cache:
        pickle.dump(metadata, cache)
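For completeness, here is a minimal sketch of how another script could then consume the cached mapping without touching MySQL at all, assuming the pickle was written by the script above; once the MetaData is unpickled it already holds the reflected Table objects, so they can be pulled straight from metadata.tables instead of being autoloaded again:

import pickle

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

# hypothetical consumer script: reuse the pickled metadata, no MySQL round trip
with open('orm.p', 'rb') as cache:
    metadata = pickle.load(cache)
metadata.bind = create_engine('sqlite:///database')

Base = declarative_base()

class Users(Base):
    # the unpickled MetaData already contains the reflected 'users' Table,
    # so we can reference it directly rather than autoloading a second time
    __table__ = metadata.tables['users']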
Any comments on whether this is alright (it works), or is there something I can improve?