I needed to migrate live data from my production server, which runs PostgreSQL, to my development environment, which runs SQLite.
SQLite3 is an amazing database for development – it would be a nightmare to get PostgreSQL and psycopg2 working on three different environments (Mac, Windows, Cygwin), let alone one.
I’ve figured out a way to export data reliably although it is a bit of a pain. Let me document it for future reference.
Generate SQL Dump
Use the --inserts flag so that every row becomes its own INSERT statement. This is helpful for several reasons – for example, one bad line won't kill the whole import.
pg_dump --inserts -t table_name db_name > pg_dump.sql
Modify the file to be SQLite compatible
Next step: open the dump and strip out the Postgres-specific preamble (the SET statements and comments) that comes before the CREATE TABLE statement.
Then, modify the CREATE TABLE statement itself to be compatible with SQLite.
First, make sure your primary key column is declared INTEGER PRIMARY KEY AUTOINCREMENT – note that SQLite spells these keywords without underscores. This will cause headaches if you don't have an auto-incrementing key.
CREATE TABLE table_name ( id INTEGER PRIMARY KEY AUTOINCREMENT, .... )
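A quick way to check the rewritten statement is to feed it to a scratch database and confirm the key gets assigned. This is just a sketch – the table and column names below are hypothetical stand-ins for your own schema:

```shell
# Hypothetical schema: verify that sqlite3 accepts the rewritten
# CREATE TABLE and assigns the auto-incrementing key.
rm -f /tmp/check.db
sqlite3 /tmp/check.db <<'SQL'
CREATE TABLE table_name (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name text,
    created_at timestamp
);
INSERT INTO table_name (name) VALUES ('smoke test');
SQL
sqlite3 /tmp/check.db 'SELECT id FROM table_name;'   # → 1, assigned automatically
```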
We must use the statement generated by pg_dump because the field ordering is very important – the INSERT statements don't name their columns, and a fresh syncdb would generate fields in alphabetical order.
Next, make sure there are no Postgres-specific functions that SQLite can't understand – for me, it was the date functions.
Simply remove the special clauses on timestamp fields, such as DEFAULT now().
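If the clause appears literally as DEFAULT now() in your dump, a single sed pass can strip it. This is only a sketch under that assumption – adjust the pattern if your dump uses a different default expression:

```shell
# Sketch: strip the Postgres-only default clause, assuming it appears
# literally as " DEFAULT now()" in the dump file.
sed -i 's/ DEFAULT now()//g' pg_dump.sql
```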
Next, replace all instances of true with 1, and false with 0 – SQLite has no boolean type and stores these as integers.
sed -i 's/true/1/g' pg_dump.sql
sed -i 's/false/0/g' pg_dump.sql
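One caveat: a blanket substitution will also rewrite true and false wherever they appear inside your string data. With GNU sed you can hedge a little by matching whole words only, though a value that is exactly 'true' will still be caught:

```shell
# Safer sketch (GNU sed): replace true/false only as whole words, so
# string values like 'untrue story' survive intact.
sed -i 's/\btrue\b/1/g' pg_dump.sql
sed -i 's/\bfalse\b/0/g' pg_dump.sql
```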
Destroy your table in SQLite, and start executing the file
Drop your table, since the dump file will re-create it. Then, use the .read command to start executing the SQL.
DROP TABLE table_name;
.read pg_dump.sql
Wait a long time, and when it’s done you’ll have your data!
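The same two steps can also be run non-interactively, which is handy if you want to script the whole migration – dev.db is an assumed database filename here:

```shell
# Sketch: drop and re-import without opening the interactive shell.
# "dev.db" is an assumed filename; IF EXISTS avoids an error on a
# fresh database where the table isn't there yet.
sqlite3 dev.db 'DROP TABLE IF EXISTS table_name;'
sqlite3 dev.db '.read pg_dump.sql'
```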
PS: A common gotcha: do NOT end sqlite3 dot-commands (like .read) with a semicolon – it will be read as part of your argument.