postgresql-async & mysql-async - async, Netty-based database drivers for MySQL and PostgreSQL written in Scala 2.10
The main goal of this project is to implement simple, async, performant and reliable database drivers for PostgreSQL and MySQL in Scala. This is not supposed to be a JDBC replacement; these drivers aim to cover the common "send a statement, get a response" flow you usually see in applications out there, so it's unlikely there will be support for updating result sets in place or anything like that.
This project always returns JodaTime objects when dealing with date types, not the java.util.Date class.
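For a feel of that flow, here is a minimal sketch against the PostgreSQL driver, assuming the URL-based configuration parser shipped with it; the connection string, database and credentials are placeholders, and it blocks on the futures only to keep the example short:

import com.github.mauricio.async.db.postgresql.PostgreSQLConnection
import com.github.mauricio.async.db.postgresql.util.URLParser

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object QuickQuery extends App {
  // Placeholder connection string: adjust host, port, database and credentials.
  val configuration =
    URLParser.parse("jdbc:postgresql://localhost:5432/my_database?user=postgres&password=secret")
  val connection = new PostgreSQLConnection(configuration)

  // connect is asynchronous as well; we block here only for brevity.
  Await.result(connection.connect, 5.seconds)

  // sendQuery returns a Future[QueryResult]; in real code you would map/flatMap it instead of blocking.
  val currentTime = connection.sendQuery("SELECT NOW()").map { queryResult =>
    // Date and time values come back as Joda-Time objects, not java.util.Date.
    queryResult.rows.map(_.head(0))
  }

  println(Await.result(currentTime, 5.seconds))

  connection.disconnect
}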
If you want information specific to the drivers, check the PostgreSQL README and the MySQL README.
You can view the project's CHANGELOG here.
Projects that build on top of these drivers:
- Activate Framework - full ORM solution for persisting objects using a software transactional memory (STM) layer;
- ScalikeJDBC-Async - provides an abstraction layer on top of the driver allowing you to write less SQL and make use of a nice high level database access API;
- mod-mysql-postgresql - vert.x module that integrates the driver into a vert.x application;
And if you're in a hurry, you can include them in your build right away. If you're using PostgreSQL (SBT):
"com.github.mauricio" %% "postgresql-async" % "0.2.12"
Or Maven:
<dependency>
  <groupId>com.github.mauricio</groupId>
  <artifactId>postgresql-async_2.10</artifactId>
  <version>0.2.12</version>
</dependency>
And if you're into MySQL:
"com.github.mauricio" %% "mysql-async" % "0.2.12"
Or Maven:
<dependency>
  <groupId>com.github.mauricio</groupId>
  <artifactId>mysql-async_2.10</artifactId>
  <version>0.2.12</version>
</dependency>
READ THIS NOW
Both clients will let you set the connection encoding to something else. Unless you are 1000% sure of what you are doing, DO NOT change the default encoding (currently UTF-8). Some people assume the connection encoding is the database or column encoding, but IT IS NOT; it is just the encoding used for the communication between client and server.
When you change the encoding of the connection you are not affecting your database's encoding, and your columns WILL NOT be stored with the connection encoding. If the connection and database/column encodings differ, your database will automatically translate from the connection encoding to the correct one, and all your data will be safely stored in your database/column encoding.
This holds as long as you are using the correct string types; BLOB columns will not be translated, since they're supposed to hold a raw stream of bytes.
So, just don't touch it: create your tables and columns with the correct encoding and be happy.
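For illustration, here is a minimal sketch of building a configuration without touching the encoding, assuming the shared Configuration case class whose charset parameter defaults to UTF-8; host, port, credentials and database name are placeholders:

import com.github.mauricio.async.db.Configuration

// Leave the charset parameter out so the connection keeps the default (UTF-8).
// Host, port, credentials and database name below are just placeholders.
val config = Configuration(
  username = "postgres",
  host     = "localhost",
  port     = 5432,
  password = Some("secret"),
  database = Some("my_database")
)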
If you have used JDBC before, you might have heard that prepared statements are the best thing on earth when talking to databases. This isn't exactly true all the time (as you can see in this presentation by @tenderlove), and there is a memory cost to keeping prepared statements around.
Prepared statements are tied to a connection; they are not database-wide. So if you generate your queries dynamically all the time, you might eventually blow up your connection's memory as well as your database's.
Why?
Because when you create a prepared statement, the connection keeps the statement's description in memory locally. This can include the returned columns' information, the input parameters' information, the query text, the query identifier that will be used to execute the query, and other flags. A matching data structure is also created on the server for every connection that prepares the statement.
So, prepared statements are awesome, but are not free. Use them judiciously.
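As a hedged sketch of the difference (the products table, its columns and the helper names below are made up for illustration; sendPreparedStatement takes the statement text plus a sequence of values to bind to the ? placeholders):

import com.github.mauricio.async.db.{Connection, RowData}

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Good: one parameterized statement, prepared once per connection and reused;
// only the bound value changes between executions.
def findProduct(connection: Connection, id: Long): Future[Option[RowData]] =
  connection
    .sendPreparedStatement("SELECT id, name FROM products WHERE id = ?", List(id))
    .map(_.rows.flatMap(_.headOption))

// Wasteful: interpolating the value into the SQL text produces a different statement
// for every id, so each one may be prepared (and kept in memory) separately -
// besides being a SQL injection risk.
def findProductWastefully(connection: Connection, id: Long): Future[Option[RowData]] =
  connection
    .sendPreparedStatement(s"SELECT id, name FROM products WHERE id = $id")
    .map(_.rows.flatMap(_.headOption))

If you really do need fully dynamic, one-off SQL, sending it through sendQuery (which does not prepare anything) tends to be a better fit than preparing a statement you will never reuse.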
The design goals for these drivers are:
- fast, fast and faster
- small memory footprint
- avoid copying data as much as possible (we're always trying to use wrap and slice on buffers)
- easy to use, call a method, get a future or a callback and be happy
- never, ever, block
- all features covered by tests
- next to zero dependencies (it currently depends on Netty, JodaTime and SLF4J only)
What is still missing:
- more authentication mechanisms
- benchmarks
- more tests (run the jacoco:cover sbt task and see where you can improve)
- timeout handler for initial handshake and queries
If you want to contribute:
- check out the source code
- find bugs, find places where performance can be improved
- check the What is missing section
- check the issues page for bugs or new features
- send a pull request with specs