Meetup Feb 17th, 2014
Migrating from MongoDB to Neo4j

Agenda
• Intros
– name, what you do, interest in Neo4j?

• Case Study, Moving from MongoDB
– considerations, why and how
– steps taken, findings
– using the Batch Importer

• Group Discussion
– experiences from others?
source: http://neo4j.rubyforge.org/guides/why_graph_db.html
Case Study, Moving from MongoDB

source: http://neo4j.rubyforge.org/guides/why_graph_db.html
Our Startup
– A mobile drink discovery platform: explore new drinks, post photos, learn new facts, follow other drink aficionados (whisky, beer, wine, cocktail experts)
Using MongoDB
– Pluses for us:
• flexible (by far the most substantial benefit)
• good documentation
• easy to host and integrate with our code

– Downsides for us:
• lots of collections needed (e.g. for mapping data, many-to-many relationships)
• queries needing multiple joins
Relying on Redis
– Needed to cache a lot in Redis
– We cached
• user profile
• news feed

– Too much complexity
• another denormalized data model to manage
• more difficult to test
• increase in bugs and edge cases

– Redis is still awesome; we just relied on it too much

Evaluating Neo4j
– Our goals
• simplify the data model (less denormalization)
• speed up highly relational queries
• keep our flexibility (schemaless data model)

– Considerations
• how will we host?
• will it make our codebase more complex?
• support?
• easy to troubleshoot production issues?
How We Evaluated
1. We set up an instance on Amazon EC2 (though Heroku was still an option as well)
2. Imported realistic production data with the Batch Importer
3. Took our most popular, slowest query and tested it
4. Wrote more example queries for standard use cases (creating nodes, relationships, etc.). Was it easy to use?
5. Ran a branch of our code with Neo4j for a month
How We Evaluated
1. Made sure we could get good support for the product
2. Determined effort involved in hosting it on Amazon EC2 (though Heroku was also an option)
3. Determined effort needed to import bulk data and change our data model
4. Audited each line of code and made a list of the types of queries we’d need. Estimated effort involved in updating our codebase.
5. Imported production data, then took our most popular, slowest query and tested performance.
6. Wrote other, more common queries and tested performance further (using Apache Benchmark; a sketch follows this list)
7. Was the driver support (in this case Ruby) okay, and was it well written? Would it be maintained years from now?
8. Tested it out as a code branch for at least a month
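For context, a sketch of the kind of Apache Benchmark run we mean. The endpoint is Neo4j 2.0’s Cypher REST endpoint; the payload file and query are hypothetical examples, not our actual benchmark:

  # query.json holds a Cypher POST body, e.g.:
  # {"query": "MATCH (n:User)-[:FOLLOWS]->(m) WHERE n.username = {u} RETURN m", "params": {"u": "nickTribeca"}}
  ab -n 500 -c 10 -p query.json -T application/json http://localhost:7474/db/data/cypher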
Our Findings
1. So far so good (we’ve been testing for a few weeks now)
2. Set up an instance on Amazon EC2. Wasn’t that bad.
3. Complex queries were a lot faster
4. The Ruby driver (Neography) does the job, though it isn’t perfect.
5. Planning to use Neo4j’s official Ruby library once they finish version 3.0 (which seems to not require JRuby)
Our Findings
6. We needed to create an abstraction layer in the code to simplify reads and writes with the database. Wasn’t that bad though (a sketch follows this list).
7. Our data model got a lot more intuitive. No more map collections (yay)
8. We can now implement recommendations a lot more easily when we want to
9. No longer need to rely heavily on Redis and caching
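A minimal sketch of the kind of abstraction layer we mean, assuming Neography. The class and method names are hypothetical; execute_query is Neography’s real API call:

  require 'neography'

  class GraphStore
    def initialize
      @neo = Neography::Rest.new("http://localhost:7474")
    end

    # Wrap common reads so the rest of the codebase never builds Cypher strings.
    def followers_of(username)
      @neo.execute_query(
        "MATCH (u:User)<-[:FOLLOWS]-(f:User) WHERE u.username = {username} RETURN f",
        { :username => username })
    end

    # Wrap writes the same way, keeping graph knowledge in one place.
    def record_like(username, drink_name)
      @neo.execute_query(
        "MATCH (u:User), (d:Drink) WHERE u.username = {username} AND d.name = {drink} CREATE (u)-[:LIKED]->(d)",
        { :username => username, :drink => drink_name })
    end
  end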
Our Findings

10. We think about our data differently now
11. Managing the data model is actually fun
Tutorial on Batch Importer
1. Our example involves real data
2. We will be using Ruby to generate .CSV files representing nodes and relationships
3. Beware: the existing documentation is “not good,” to put it lightly
4. Using the 2.0 version! (Precompiled binary)
https://github.com/jexp/batch-import/tree/20
Steps
1. Install Neo4j
2. Download a binary version of the Batch Importer
3. The Batch Importer requires .CSV files: one type of file imports nodes, another imports relationships
4. Decide on fields that make nodes unique
1. e.g. a user has a username, a drink has a name
2. this makes the process of mapping node relationships later a lot easier too
.CSV Formats for Nodes and Relationships
• Tab-separated columns
• Importing Nodes
– node property names in the first row
– format is <field name>:<field type> (defaults to String)
– all rows after that are the corresponding property values

• Importing Relationships
– separate .CSV file: source node’s unique field in the first column, target node’s unique field in the second column, the word “type” in the third column
– since we’re already using unique indexes on nodes, it’s easy to relate them!
– can import multiple relationship types between two kinds of nodes in the same .CSV file
Creating Drink Nodes
• Example output (tab delimited)
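(The original slide showed a screenshot; below is a hypothetical reconstruction, with made-up drink names and tabs rendered as wide spaces.)

  name:string:drink_name_index    type:label    name
  Negroni                         Drink         Negroni
  Old Fashioned                   Drink         Old Fashioned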

Creating Drink Nodes
namespace :export do
  require 'csv'

  task :generate_drink_nodes => :environment do
    # Use a tab as the column separator, per the Batch Importer format.
    CSV.open("drink_nodes.csv", "wb", { :col_sep => "\t" }) do |csv|
      # Header row: unique indexed property, label column, then plain properties.
      csv << ["name:string:drink_name_index", "type:label", "name"]
      Drink.all.each do |drink|
        csv << [drink.name, "Drink", drink.name]
      end
    end
  end
end
Running the Script
• Make sure all nodes and relationships are deleted from Neo4j:
– MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n, r
• Stop your Neo4j server before importing
• Run the import command (per the binary Batch Importer we downloaded earlier):
– ./import.sh ~/neo4j-community-2.0/data/graph.db user_nodes.csv
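Once all the .CSV files in this tutorial are generated, nodes and relationships can be imported in one run. A sketch using our file names; check the Batch Importer README for the exact multi-file syntax (comma-separated lists, if we recall correctly):

  ./import.sh ~/neo4j-community-2.0/data/graph.db drink_nodes.csv,user_nodes.csv user_rels.csv,user_drink_rels.csv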
Creating User Nodes
• Example output (tab delimited):
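(Again a hypothetical reconstruction of the screenshot; the username matches the example query later in the deck, the names are made up.)

  username:string:user_username_index    type:label    first_name    last_name
  nickTribeca                            User          Nick          Manning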

Creating User Nodes
CSV.open("user_nodes.csv", "wb", { :col_sep => "t" }) do |csv|

csv << ["username:string:user_username_index",
"type:label",
"first_name",
"last_name"]

User.all.each do |user|
csv << [user.username, "User", user.first_name, user.last_name]
end

20
User to User Relationships
• NOTE: it’s easy to relate users to users since we already have an index set up.
• Example output (tab delimited):
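(Hypothetical reconstruction of the screenshot; usernames are made up.)

  username:string:user_username_index    username:string:user_username_index    type
  nickTribeca                            whiskyFan42                            FOLLOWS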

User to User Relationships
CSV.open("user_rels.csv", "wb", { :col_sep => "t" }) do |csv|
csv << ["username:string:user_username_index",
"username:string:user_username_index",
"type"]
User.all.each do |user|
user.following.each do |other_user|
csv << [user.username, other_user.username, "FOLLOWS"]
end
user.followers.each do |other_user|
csv << [other_user.username, user.username, "FOLLOWS"]
end
end
end
22
User to Drink Relationships
• Example output:
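(Hypothetical reconstruction, tab delimited; drink names are made up.)

  username:string:user_username_index    name:string:drink_name_index    type
  nickTribeca                            Negroni                         LIKED
  nickTribeca                            Old Fashioned                   JOURNALED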

User to Drink Relationships
CSV.open("user_drink_rels.csv", "wb", { :col_sep => "t" }) do |csv|
csv <<
["username:string:user_username_index", "name:string:drink_name_index", "type"]
User.all.each do |user|
user.liked_drinks.each do |drink|

csv << [user.username, drink.name, "LIKED"]
end
user.disliked_drinks.each do |drink|
csv << [user.username, drink.name, "DISLIKED"]
end
user.drink_journal_entries.each do |entry|
csv << [user.username, entry.drink.name, "JOURNALED"]
end
end
end
24
Test Your Data
• Test with some Cypher queries
– cheat sheet: http://docs.neo4j.org/refcard/2.0
– e.g.:

MATCH (n:User)-[r:FOLLOWS]-(o) WHERE n.username = 'nickTribeca' RETURN n, r LIMIT 50

• Note: you must limit your results or else the Data Browser will become too slow to use
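As a further illustration of the kind of query the graph model makes easy (see finding 8 on recommendations), a hypothetical collaborative-filtering style query; it has not been tested against the real dataset:

  MATCH (me:User)-[:LIKED]->(d:Drink)<-[:LIKED]-(other:User)-[:LIKED]->(rec:Drink)
  WHERE me.username = 'nickTribeca' AND NOT (me)-[:LIKED]->(rec)
  RETURN rec.name, count(*) AS score
  ORDER BY score DESC LIMIT 10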
That’s the Tutorial
• You can always migrate data yourself without the Batch Importer
– e.g. a script that queries MongoDB and inserts into Neo4j in real time using your API

• Using the Batch Importer is really fast though
• We found it faster to write and less error prone than writing our own script
Group Q&A
• Thanks for coming
• @seenickcode

• nicholas.manning@gmail.com for questions

• Want to present? Let me know.

