1. Introduction to Jdbi 3
Jdbi provides convenient, idiomatic access to relational data in Java. Jdbi 3 is the third major release; it introduces enhanced support for Java 8, countless refinements to the design and implementation, and support for modular plugins.
Jdbi is built on top of JDBC. If your database has a JDBC driver, you can use Jdbi with it. Jdbi improves JDBC’s rough interface, providing a more natural Java database interface that is easy to bind to your domain data types. Unlike an ORM, we do not aim to provide a complete object relational mapping framework - instead of that hidden complexity, we provide building blocks that allow you to construct the mapping between relations and objects as appropriate for your application.
2. Jdbi API
Jdbi’s API comes in two flavors:
2.1. Fluent API
The Core API provides a fluent, imperative interface. Use Builder style objects to wire up your SQL to rich Java data types.
Jdbi jdbi = Jdbi.create("jdbc:h2:mem:test"); // (H2 in-memory database)
List<User> users = jdbi.withHandle(handle -> {
handle.execute("CREATE TABLE \"user\" (id INTEGER PRIMARY KEY, \"name\" VARCHAR)");
// Inline positional parameters
handle.execute("INSERT INTO \"user\" (id, \"name\") VALUES (?, ?)", 0, "Alice");
// Positional parameters
handle.createUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (?, ?)")
.bind(0, 1) // 0-based parameter indexes
.bind(1, "Bob")
.execute();
// Named parameters
handle.createUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (:id, :name)")
.bind("id", 2)
.bind("name", "Clarice")
.execute();
// Named parameters from bean properties
handle.createUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (:id, :name)")
.bindBean(new User(3, "David"))
.execute();
// Easy mapping to any type
return handle.createQuery("SELECT * FROM \"user\" ORDER BY \"name\"")
.mapToBean(User.class)
.list();
});
assertThat(users).containsExactly(
new User(0, "Alice"),
new User(1, "Bob"),
new User(2, "Clarice"),
new User(3, "David"));
2.2. Declarative API
The SQL Object extension sits atop Core, and provides a declarative interface. Tell Jdbi what SQL to execute and the shape of the results you like by declaring an annotated Java interface, and it will provide the implementation.
// Define your own declarative interface
public interface UserDao {
@SqlUpdate("CREATE TABLE \"user\" (id INTEGER PRIMARY KEY, \"name\" VARCHAR)")
void createTable();
@SqlUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (?, ?)")
void insertPositional(int id, String name);
@SqlUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (:id, :name)")
void insertNamed(@Bind("id") int id, @Bind("name") String name);
@SqlUpdate("INSERT INTO \"user\" (id, \"name\") VALUES (:id, :name)")
void insertBean(@BindBean User user);
@SqlQuery("SELECT * FROM \"user\" ORDER BY \"name\"")
@RegisterBeanMapper(User.class)
List<User> listUsers();
}
Jdbi jdbi = Jdbi.create("jdbc:h2:mem:test");
jdbi.installPlugin(new SqlObjectPlugin());
// Jdbi implements your interface based on annotations
List<User> userNames = jdbi.withExtension(UserDao.class, dao -> {
dao.createTable();
dao.insertPositional(0, "Alice");
dao.insertPositional(1, "Bob");
dao.insertNamed(2, "Clarice");
dao.insertBean(new User(3, "David"));
return dao.listUsers();
});
assertThat(userNames).containsExactly(
new User(0, "Alice"),
new User(1, "Bob"),
new User(2, "Clarice"),
new User(3, "David"));
Jdbi has a flexible plugin architecture which makes it easy to fold in support for your favorite libraries (Guava, JodaTime, Spring, Vavr) or database vendor (H2, Oracle, Postgres).
Jdbi is not an ORM. There is no session cache, change tracking, "open session in view", or cajoling the library to understand your schema.
Instead, Jdbi provides straightforward mapping between SQL and simple tabular data structures.
You bring your own SQL, and Jdbi only runs the commands you tell it to—the way God intended.
Already using Jdbi v2? See Upgrading from v2 to v3. |
3. Getting Started
Jdbi is easy to include in your Java project - with an Apache 2.0 license, few external dependencies, and JARs distributed through Maven Central, you can just include the relevant artifacts in your POM:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-bom</artifactId>
<type>pom</type>
<version>3.33.0</version>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Then, in your <dependencies> section, declare a dependency for each Jdbi module you want to use:
<dependencies>
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-core</artifactId>
</dependency>
</dependencies>
Jdbi provides several other modules, which enhance the core API with additional features.
3.1. Java Compatibility
Jdbi runs on all Java versions 8 or later. Java 11 or later is required to build.
Java 8 support is considered deprecated and will be maintained on a best-effort basis for now, but will be removed in a future release. To run on Java 8, you may need to dependency-manage your caffeine version back to 2.x; caffeine 3.x is required on newer JDKs but will not run on 8.
|
3.2. Modules
- jdbi3-sqlobject: The SQL Object extension. Most Jdbi users use this.
- jdbi3-guava: Support for Guava’s collection and Optional types.
- jdbi3-jodatime2: Support for JodaTime v2’s DateTime type.
- jdbi3-jpa: Minimal support for JPA annotations.
- jdbi3-kotlin: Automatically map Kotlin data classes.
- jdbi3-kotlin-sqlobject: Enhance the SQL Object extension to support Kotlin default methods and default parameters.
- jdbi3-oracle12: Support Oracle returning DML statements.
- jdbi3-postgres: Support for most data types supported by the Postgres driver.
- jdbi3-spring5: Provides a factory bean to set up a Jdbi singleton.
- jdbi3-stringtemplate4: Use the StringTemplate4 template engine instead of Jdbi’s built-in engine.
- jdbi3-vavr: Support for Vavr Tuples, Collections and Value arguments.
3.3. Forethoughts
You’ll probably want to add our annotations org.jdbi.v3.meta.Beta and org.jdbi.v3.meta.Alpha to your IDE’s "unstable API usage" blacklist.
|
Our org.jdbi.*.internal packages are not considered public API; their contents may change radically without warning.
|
4. Core API
4.1. Jdbi
The Jdbi class is the main entry point into the library. Each Jdbi instance wraps a JDBC DataSource. It is also a repository of configuration for your database session.
There are a few ways to create a Jdbi instance. You can use a JDBC URL:
// H2 in-memory database
Jdbi jdbi = Jdbi.create("jdbc:h2:mem:test");
If you have a DataSource object, you can use that directly:
DataSource ds = ...
Jdbi jdbi = Jdbi.create(ds);
Jdbi instances are thread-safe and do not own any database resources. Typically applications create a single, shared Jdbi instance, and set up any common configuration there. See Configuration for more details.
Jdbi itself does not provide connection pooling or other high availability features, but it can be combined with other software that does.
In a more limited scope (such as an HTTP request, or event callback), you would then request a Handle object from your Jdbi instance.
4.2. Handle
Handles represent an active database connection.
Handle is used to prepare and run SQL statements against the database, and manage database transactions. It provides access to fluent statement APIs that can bind arguments, execute the statement, and then map any results into Java objects.
A Handle inherits configuration from the Jdbi at the time it is created. See Configuration for more details.
Because Handle holds an open connection, care must be taken to ensure
that each handle is closed when you are done with it. Failure to close
Handles will eventually overwhelm your database with open connections, or
drain your connection pool.
|
There are a few ways to obtain a Handle instance at runtime.
If your operation will return some result, use jdbi.withHandle():
List<String> names = jdbi.withHandle(handle ->
handle.createQuery("select name from contacts")
.mapTo(String.class)
.list());
assertThat(names).contains("Alice", "Bob");
If your operation does not need to return a result, use Jdbi.useHandle(HandleConsumer):
jdbi.useHandle(handle -> {
handle.execute("create table contacts (id int primary key, name varchar(100))");
handle.execute("insert into contacts (id, name) values (?, ?)", 1, "Alice");
handle.execute("insert into contacts (id, name) values (?, ?)", 2, "Bob");
});
Both withHandle and useHandle open a temporary handle, call your callback, and immediately release the handle when your callback returns.
You may notice the "consumer" vs "callback" naming pattern in a few
places in Jdbi. Callbacks return a value, and are coupled to with- methods.
Consumers do not return a value, and are coupled to use- methods.
|
Alternatively, if you want to manage the lifecycle of the handle yourself, use jdbi.open():
try (Handle handle = jdbi.open()) {
result = handle.execute("insert into contacts (id, name) values (?, ?)", 3, "Chuck");
}
When using jdbi.open(), you should always use try-with-resources or a try-finally block to ensure the database connection is released. Failing to release the handle will leak connections. We recommend using withHandle or useHandle over open whenever possible.
|
4.3. Arguments
Arguments are Jdbi’s representation of JDBC statement parameters (the ? in select * from Foo where bar = ?).
To set the parameter ? on a JDBC PreparedStatement, you would call ps.setString(1, "Baz").
With Jdbi, when you bind the string "Baz", it will search through all registered ArgumentFactory instances until it finds one that is willing to convert the String into an Argument. The Argument is responsible for setting the String for the placeholder exactly as setString does.
Arguments can perform more advanced bindings than simple JDBC supports: a BigDecimal could be bound as a SQL decimal, a java.time.Year as a SQL int, or a complex object could be serialized to a byte array and bound as a SQL blob.
The use of Jdbi arguments is limited to JDBC prepared statement parameters. Notably, arguments usually cannot be used to change the structure of a query (for example the table or column name, SELECT or INSERT, etc.), nor may they be interpolated into string literals.
See Templating and TemplateEngine for more information.
|
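Where the query structure itself must vary, templating is the documented route. A minimal sketch, assuming jdbi3-core and an H2 driver on the classpath, and assuming the default template engine renders <name> placeholders from attributes set with define() (class and table names here are invented for illustration):

```java
import java.util.List;
import org.jdbi.v3.core.Jdbi;

public class DefineExample {
    public static List<String> run() {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:defineExample");
        return jdbi.withHandle(handle -> {
            handle.execute("create table contacts (id int primary key, name varchar(100))");
            handle.execute("insert into contacts (id, name) values (?, ?)", 1, "Alice");
            // The table name is rendered into the SQL as a defined attribute
            // (it changes query structure); the id remains a bound parameter.
            return handle.createQuery("select name from <table> where id = :id")
                .define("table", "contacts")
                .bind("id", 1)
                .mapTo(String.class)
                .list();
        });
    }

    public static void main(String[] args) {
        System.out.println(DefineExample.run());
    }
}
```

Note that defined attributes are substituted into the SQL text before the statement is prepared, so they should never come from untrusted input.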
4.3.1. Positional Arguments
When a SQL statement uses ? tokens, Jdbi can bind values to parameters at the corresponding index (0-based):
handle.createUpdate("insert into contacts (id, name) values (?, ?)")
.bind(0, 3)
.bind(1, "Chuck")
.execute();
String name = handle.createQuery("select name from contacts where id = ?")
.bind(0, 3)
.mapTo(String.class)
.one();
4.3.2. Named Arguments
When a SQL statement uses colon-prefixed tokens like :name, Jdbi can bind parameters by name:
handle.createUpdate("insert into contacts (id, name) values (:id, :name)")
.bind("id", 3)
.bind("name", "Chuck")
.execute();
String name = handle.createQuery("select name from contacts where id = :id")
.bind("id", 3)
.mapTo(String.class)
.one();
This :foo syntax is the default behavior, and it can be changed; see the ColonPrefixSqlParser class. Jdbi also provides support for #foo syntax out-of-the-box, and you can create your own as well.
|
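Switching to the #foo syntax might look like the following sketch, which assumes HashPrefixSqlParser lives in org.jdbi.v3.core.statement and that the parser is set through the SqlStatements config class (H2 in-memory database used for illustration):

```java
import java.util.List;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.core.statement.HashPrefixSqlParser;
import org.jdbi.v3.core.statement.SqlStatements;

public class HashPrefixExample {
    public static List<String> run() {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:hashPrefixExample");
        // Replace the default colon-prefix parser, so named
        // parameters are written #name instead of :name.
        jdbi.getConfig(SqlStatements.class).setSqlParser(new HashPrefixSqlParser());
        return jdbi.withHandle(handle -> {
            handle.execute("create table contacts (id int primary key, name varchar(100))");
            handle.execute("insert into contacts (id, name) values (?, ?)", 1, "Alice");
            return handle.createQuery("select name from contacts where id = #id")
                .bind("id", 1)
                .mapTo(String.class)
                .list();
        });
    }

    public static void main(String[] args) {
        System.out.println(HashPrefixExample.run());
    }
}
```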
Mixing named and positional arguments is not allowed, as it would become confusing very quickly. |
4.3.3. Supported Argument Types
Out of the box, Jdbi supports the following types as SQL statement arguments:
- Primitives: boolean, byte, short, int, long, char, float, and double
- java.lang: Boolean, Byte, Short, Integer, Long, Character, Float, Double, String, and Enum (stored as the enum value’s name by default)
- java.math: BigDecimal
- java.net: Inet4Address, Inet6Address, URL, and URI
- java.sql: Blob, Clob, Date, Time, and Timestamp
- java.time: Instant, LocalDate, LocalDateTime, LocalTime, OffsetDateTime, ZonedDateTime, and ZoneId
- java.util: Date, Optional (around any other supported type), and UUID
- java.util.Collection and Java arrays (stored as SQL arrays). Some additional setup may be required depending on the type of array element.
You can also configure Jdbi to support additional argument types. More on that later.
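The "additional setup" for array elements typically means telling Jdbi which vendor SQL type name to use. A hedged sketch, assuming the registerArrayType(Class, String) convenience method and an H2 in-memory database with its INTEGER ARRAY column type (table and column names are invented):

```java
import org.jdbi.v3.core.Jdbi;

public class ArrayArgumentExample {
    public static int run() {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:arrayExample");
        return jdbi.withHandle(handle -> {
            // Map int elements to the vendor's "integer" SQL type.
            // This may already be registered for some drivers.
            handle.registerArrayType(int.class, "integer");
            handle.execute("create table groups (id int primary key, user_ids integer array)");
            // The int[] is bound as a single SQL array placeholder.
            return handle.createUpdate("insert into groups (id, user_ids) values (:id, :userIds)")
                .bind("id", 1)
                .bind("userIds", new int[] {10, 5, 70})
                .execute();
        });
    }

    public static void main(String[] args) {
        System.out.println(ArrayArgumentExample.run());
    }
}
```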
4.3.4. Binding Arguments
Arguments to SQL statements can be bound in a few different ways.
You can bind individual arguments:
handle.createUpdate("insert into contacts (id, name) values (:id, :name)")
.bind("id", 1)
.bind("name", "Alice")
.execute();
You can bind multiple arguments at once from the entries of a Map:
Map<String, Object> contact = new HashMap<>();
contact.put("id", 2);
contact.put("name", "Bob");
handle.createUpdate("insert into contacts (id, name) values (:id, :name)")
.bindMap(contact)
.execute();
You can bind multiple values at once, from either a List<T> or a vararg:
List<String> keys = new ArrayList<>();
keys.add("user_name");
keys.add("street");
handle.createQuery("SELECT value FROM items WHERE kind in (<listOfKinds>)")
.bindList("listOfKinds", keys)
.mapTo(String.class)
.list();
// Or, using the 'vararg' definition
handle.createQuery("SELECT value FROM items WHERE kind in (<varargListOfKinds>)")
.bindList("varargListOfKinds", "user_name", "docs", "street", "library")
.mapTo(String.class)
.list();
Using bindList requires writing your SQL with an attribute, not a binding,
despite the fact that your values are bound. The attribute is a placeholder that will be
safely rendered to a comma-separated list of binding placeholders.
|
You can bind multiple arguments from properties of a Java Bean:
Contact contact = new Contact();
contact.setId(3);
contact.setName("Cindy");
handle.createUpdate("insert into contacts (id, name) values (:id, :name)")
.bindBean(contact)
.execute();
You can also bind an Object’s public fields:
Object contact = new Object() {
public int id = 0;
public String name = "Cindy";
};
handle.createUpdate("insert into contacts (id, name) values (:id, :name)")
.bindFields(contact)
.execute();
Or you can bind public, parameterless methods of an Object:
Object contact = new Object() {
public int theId() {
return 0;
}
public String theName() {
return "Cindy";
}
};
handle.createUpdate("insert into contacts (id, name) values (:theId, :theName)")
.bindMethods(contact)
.execute();
Optionally, you can qualify each bound bean/object with a prefix. This can help remove ambiguity in situations where two or more bound beans have similar property names:
Folder folder = new Folder(1, "Important Documents");
Document document =
new Document(100, "memo.txt", "Business business business. Numbers.");
handle.createUpdate("insert into documents (id, folder_id, name, contents) " +
"values (:d.id, :f.id, :d.name, :d.contents)")
.bindBean("f", folder)
.bindMethods("f", folder)
.bindFields("d", document)
.execute();
bindBean() , bindFields() , and bindMethods() may be used to bind nested
properties, e.g. :user.address.street .
|
bindMap() does not bind nested properties—map keys are expected to exactly
match the bound parameter name.
|
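The nested-property traversal mentioned above can be sketched as follows. This assumes an H2 in-memory database; the User and Address bean classes and the users table are hypothetical, invented for illustration:

```java
import org.jdbi.v3.core.Jdbi;

public class NestedBindExample {
    public static class Address {
        private final String street;
        public Address(String street) { this.street = street; }
        public String getStreet() { return street; }
    }

    public static class User {
        private final int id;
        private final Address address;
        public User(int id, Address address) { this.id = id; this.address = address; }
        public int getId() { return id; }
        public Address getAddress() { return address; }
    }

    public static String run() {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:nestedBindExample");
        return jdbi.withHandle(handle -> {
            handle.execute("create table users (id int primary key, street varchar(100))");
            // :user.address.street traverses getAddress().getStreet()
            // on the bean bound under the "user" prefix.
            handle.createUpdate("insert into users (id, street) values (:user.id, :user.address.street)")
                .bindBean("user", new User(1, new Address("Main St")))
                .execute();
            return handle.createQuery("select street from users where id = 1")
                .mapTo(String.class)
                .one();
        });
    }

    public static void main(String[] args) {
        System.out.println(NestedBindExample.run());
    }
}
```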
The authors recommend checking out Immutables support for an advanced way to easily bind and map value types. |
4.3.5. Custom Arguments
Occasionally your data model will use data types not natively supported by Jdbi (see Supported Argument Types).
Fortunately, Jdbi can be configured to bind custom data types as arguments, by implementing a few simple interfaces.
Core JDBC features are generally well supported by all database vendors. However, more advanced usages like array support or geometry types tend to quickly become vendor-specific. |
Argument
The Argument interface wraps a single value into a binding.
static class UUIDArgument implements Argument {
private UUID uuid;
UUIDArgument(UUID uuid) {
this.uuid = uuid;
}
@Override
public void apply(int position, PreparedStatement statement, StatementContext ctx)
throws SQLException {
statement.setString(position, uuid.toString()); (1)
}
}
@Test
public void uuidArgument() {
UUID u = UUID.randomUUID();
assertThat(handle.createQuery("SELECT CAST(:uuid AS VARCHAR)")
.bind("uuid", new UUIDArgument(u))
.mapTo(String.class)
.one()).isEqualTo(u.toString());
}
1 | Since Argument usually calls directly into JDBC, it is given the one-based index (as expected by JDBC) when it is applied. |
Here we use an Argument to directly bind a UUID. In this particular case, the most obvious approach is to send the UUID to the database as a String. If your JDBC driver supports custom types directly or efficient binary transfers, you can leverage them easily here.
ArgumentFactory
The ArgumentFactory interface provides Argument instances for any data type it knows about. By implementing and registering an argument factory, it is possible to bind custom data types without having to explicitly wrap them in Argument objects.
Jdbi provides an AbstractArgumentFactory class which simplifies implementing the ArgumentFactory contract:
static class UUIDArgumentFactory extends AbstractArgumentFactory<UUID> {
UUIDArgumentFactory() {
super(Types.VARCHAR); (1)
}
@Override
protected Argument build(UUID value, ConfigRegistry config) {
return (position, statement, ctx) -> statement.setString(position, value.toString()); (2)
}
}
@Test
public void uuidArgumentFactory() {
UUID u = UUID.randomUUID();
handle.registerArgument(new UUIDArgumentFactory());
assertThat(handle.createQuery("SELECT CAST(:uuid AS VARCHAR)")
.bind("uuid", u)
.mapTo(String.class)
.one()).isEqualTo(u.toString());
}
1 | The JDBC SQL type constant to use when binding UUIDs. Jdbi needs this in order to bind null UUID values. See PreparedStatement.setNull(int,int). |
2 | Since Argument is a functional interface, it can be implemented as a simple lambda expression. |
Prepared Arguments
Traditional argument factories decide to bind based on both the type and actual value of the binding. This is very flexible, but when binding a large PreparedBatch it incurs a serious performance penalty, as the entire chain of argument factories must be consulted for each batch of arguments added.
To address this issue, implement ArgumentFactory.Preparable, which promises to handle all values of a given Type. Most built-in argument factories now implement the Preparable interface.
Preparable argument factories are consulted before traditional argument factories. If you’d prefer to keep the old behavior, you may disable this feature with getConfig(Arguments.class).setPreparedArgumentsEnabled(false).
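A preparable factory for the UUID example above might look like this sketch. It assumes Preparable declares prepare(Type, ConfigRegistry) and supplies a default build() that delegates to it; verify against your Jdbi version's javadoc before relying on it:

```java
import java.lang.reflect.Type;
import java.sql.Types;
import java.util.Optional;
import java.util.UUID;
import java.util.function.Function;
import org.jdbi.v3.core.argument.Argument;
import org.jdbi.v3.core.argument.ArgumentFactory;
import org.jdbi.v3.core.config.ConfigRegistry;

// Because this factory promises to handle every value of type UUID,
// Jdbi can reuse the prepared function for each row of a PreparedBatch
// instead of re-consulting the factory chain per value.
public class PreparableUuidFactory implements ArgumentFactory.Preparable {
    @Override
    public Optional<Function<Object, Argument>> prepare(Type type, ConfigRegistry config) {
        if (!UUID.class.equals(type)) {
            return Optional.empty();
        }
        return Optional.of(value -> (position, statement, ctx) -> {
            if (value == null) {
                statement.setNull(position, Types.VARCHAR);
            } else {
                statement.setString(position, value.toString());
            }
        });
    }
}
```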
Arguments Registry
When you register an ArgumentFactory, the registration is stored in an Arguments instance held by Jdbi. Arguments is a configuration class which stores all registered argument factories (including the factories for built-in arguments).
Under the hood, when you bind arguments to a statement, Jdbi consults the Arguments config object and searches for an ArgumentFactory which knows how to convert a bound object into an Argument.
Later, when the statement is executed, each Argument located during binding is applied to the JDBC PreparedStatement.
Occasionally, two or more argument factories will support arguments of the same data type. When this happens, the last-registered factory wins. Preparable argument factories always take precedence over base argument factories. This means that you can override the way any data type is bound, including the data types supported out of the box. |
4.4. Queries
A Query is a result-bearing SQL statement that returns a result set from the database.
List<Map<String, Object>> users =
handle.createQuery("SELECT id, \"name\" FROM \"user\" ORDER BY id ASC")
.mapToMap()
.list();
assertThat(users).containsExactly(
map("id", 1, "name", "Alice"),
map("id", 2, "name", "Bob"));
To get a single row from a query, there are a few possible methods to use, depending on the number of rows that might be in the result set.
Call one() when you expect the result to contain exactly one row. Returns null only if the returned row maps to null. Throws an exception if the result has zero or multiple rows.
String name = handle.select("select name from users where id = ?", 3)
.mapTo(String.class)
.one();
Call findOne() when you expect the result to contain zero or one row. Returns Optional.empty() if there are no rows, or if the one row maps to null. Throws an exception if the result has multiple rows.
Optional<String> name = handle.select(...)
.mapTo(String.class)
.findOne();
Call first() when you expect the result to contain at least one row. Returns null if the first row maps to null. Throws an exception if the result has zero rows.
String name = handle.select("select name from users where id = ?", 3)
.mapTo(String.class)
.first();
Call findFirst() when the result may contain any number of rows. Returns Optional.empty() if there are no rows, or if the first row maps to null.
Optional<String> name = handle.select(...)
.mapTo(String.class)
.findFirst();
Multiple result rows can be returned in a list:
List<String> names = handle.createQuery(
"select title from films where genre = :genre order by title")
.bind("genre", "Action")
.mapTo(String.class)
.list();
For other collections, use collect() with a collector:
Set<String> names = handle.createQuery(
"select title from films where genre = :genre order by title")
.bind("genre", "Action")
.mapTo(String.class)
.collect(Collectors.toSet());
You can also stream results:
handle.createQuery(
"select title from films where genre = :genre order by title")
.bind("genre", "Action")
.mapTo(String.class)
.useStream(stream -> {
// do stuff with stream
});
Thus far, all examples have shown a String result type. Of course, you can map to many other data types:
LocalDate releaseDate = handle.createQuery(
"select release_date from films where name = :name")
.bind("name", "Star Wars: A New Hope")
.mapTo(LocalDate.class)
.one();
4.5. Mappers
Jdbi makes use of mappers to convert result data into Java objects. There are two types of mappers:
- Row Mappers, which map a full row of result set data.
- Column Mappers, which map a single column of a result set row.
4.5.1. Row Mappers
RowMapper is a functional interface, which maps the current row of a JDBC ResultSet to a mapped type. Row mappers are invoked once for each row in the result set.
Since RowMapper is a functional interface, it can be provided inline to a query using a lambda expression:
List<User> users = handle.createQuery("SELECT id, \"name\" FROM \"user\" ORDER BY id ASC")
.map((rs, ctx) -> new User(rs.getInt("id"), rs.getString("name")))
.list();
There are three different types being used in the above example. Query, returned by Handle.createQuery(), implements the ResultBearing interface. The ResultBearing.map() method takes a RowMapper<T> and returns a ResultIterable<T>. Finally, ResultIterable.list() collects each row in the result set into a List<T>.
|
Row mappers may be defined as classes, which allows for re-use:
class UserMapper implements RowMapper<User> {
@Override
public User map(ResultSet rs, StatementContext ctx) throws SQLException {
return new User(rs.getInt("id"), rs.getString("name"));
}
}
List<User> users = handle.createQuery("SELECT id, \"name\" FROM \"user\" ORDER BY id ASC")
.map(new UserMapper())
.list();
This RowMapper is equivalent to the lambda mapper above, but more explicit.
RowMappers registry
Row mappers can be registered for particular types. This simplifies usage, requiring only that you specify what type you want to map to. Jdbi automatically looks up the mapper from the registry, and uses it.
jdbi.registerRowMapper(User.class,
(rs, ctx) -> new User(rs.getInt("id"), rs.getString("name")));
try (Handle handle = jdbi.open()) {
List<User> users = handle.createQuery("SELECT id, name FROM user ORDER BY id ASC")
.mapTo(User.class)
.list();
}
A mapper which implements RowMapper with an explicit mapped type (such as the UserMapper class in the previous section) may be registered without specifying the mapped type:
handle.registerRowMapper(new UserMapper());
When this method is used, Jdbi inspects the generic class signature of the mapper to automatically discover the mapped type.
It is possible to register more than one mapper for any given type. When this happens, the last-registered mapper for a given type takes precedence. This permits optimizations, like registering a "default" mapper for some type, while allowing that default mapper to be overridden with a different one when appropriate.
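The last-registered-wins rule can be sketched as follows, assuming an H2 in-memory database (the User class and table here are invented for illustration):

```java
import org.jdbi.v3.core.Jdbi;

public class MapperOverrideExample {
    public static class User {
        public final int id;
        public final String name;
        public User(int id, String name) { this.id = id; this.name = name; }
    }

    public static String run() {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:mapperOverride");
        // A "default" mapper for User...
        jdbi.registerRowMapper(User.class,
            (rs, ctx) -> new User(rs.getInt("id"), rs.getString("name")));
        // ...overridden by a later registration, which takes precedence
        // and upper-cases the name while mapping.
        jdbi.registerRowMapper(User.class,
            (rs, ctx) -> new User(rs.getInt("id"), rs.getString("name").toUpperCase()));
        return jdbi.withHandle(handle -> {
            handle.execute("create table users (id int primary key, name varchar(100))");
            handle.execute("insert into users (id, name) values (?, ?)", 1, "alice");
            return handle.createQuery("select id, name from users")
                .mapTo(User.class)
                .one()
                .name;
        });
    }

    public static void main(String[] args) {
        System.out.println(MapperOverrideExample.run());
    }
}
```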
RowMapperFactory
A RowMapperFactory can produce row mappers for arbitrary types.
Implementing a factory might be preferable to a regular row mapper if:
- The mapper implementation is generic, and could apply to multiple mapped types. For example, Jdbi provides a generalized BeanMapper, which maps columns to bean properties for any bean class.
- The mapped type has a generic signature, and/or the mapper could be composed from other registered mappers. For example, Jdbi provides a Map.Entry<K,V> mapper, provided a mapper is registered for types K and V.
- You want to bundle multiple mappers into a single class.
Let’s take an example Pair<L, R> class:
public final class Pair<L, R> {
public final L left;
public final R right;
public Pair(L left, R right) {
this.left = left;
this.right = right;
}
}
Now, let’s implement a row mapper factory. The factory should produce a RowMapper<Pair<L, R>> for any Pair<L, R> type, where the L type is mapped from the first column, and R from the second, assuming there are column mappers registered for both L and R.
Let’s take this one step at a time:
public class PairMapperFactory implements RowMapperFactory {
public Optional<RowMapper<?>> build(Type type, ConfigRegistry config) {
...
}
}
The build method accepts a mapped type, and a config registry. It may return Optional.of(someMapper) if it knows how to map that type, or Optional.empty() otherwise.
First we check whether the mapped type is a Pair:
if (!Pair.class.equals(GenericTypes.getErasedType(type))) {
return Optional.empty();
}
The GenericTypes utility class is discussed in Working with Generic Types.
|
Next, we extract the L and R generic parameters from the mapped type:
Type leftType = GenericTypes.resolveType(Pair.class.getTypeParameters()[0], type);
Type rightType = GenericTypes.resolveType(Pair.class.getTypeParameters()[1], type);
In the first line, Pair.class.getTypeParameters()[0] gives the type variable L. Likewise in the second line, Pair.class.getTypeParameters()[1] gives the type variable R.
We use resolveType() to resolve the types for the L and R type variables in the context of the mapped type.
Now that we have the types for L and R, we can look up the column mappers for those types from the ColumnMappers config class, through the config registry:
ColumnMappers columnMappers = config.get(ColumnMappers.class);
ColumnMapper<?> leftMapper = columnMappers.findFor(leftType)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for Pair left parameter " + leftType));
ColumnMapper<?> rightMapper = columnMappers.findFor(rightType)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for Pair right parameter " + rightType));
The config registry is a locator for config classes. So when we call config.get(ColumnMappers.class), we get back a ColumnMappers instance with the current column mapper configuration.
Next we call ColumnMappers.findFor() to get the column mappers for the left and right types.
You may have noticed that although this method can return Optional , we’re
throwing an exception if we can’t find the left- or right-hand mappers. We’ve
found this to be a best practice: return Optional.empty() if the factory
knows nothing about the mapped type (Pair , in this case). If it knows the
mapped type but is missing some configuration to make it work (e.g. mappers not
registered for L or R parameter) it is more helpful to throw an exception
with an informative message, so users can diagnose why the mapper isn’t
working as expected.
|
Finally, we construct a pair mapper, and return it:
RowMapper<?> pairMapper = (rs, ctx) ->
new Pair(leftMapper.map(rs, 1, ctx), // In JDBC, column numbers start at 1
rightMapper.map(rs, 2, ctx));
return Optional.of(pairMapper);
Here is the factory class all together:
public class PairMapperFactory implements RowMapperFactory {
public Optional<RowMapper<?>> build(Type type, ConfigRegistry config) {
if (!Pair.class.equals(GenericTypes.getErasedType(type))) {
return Optional.empty();
}
Type leftType = GenericTypes.resolveType(Pair.class.getTypeParameters()[0], type);
Type rightType = GenericTypes.resolveType(Pair.class.getTypeParameters()[1], type);
ColumnMappers columnMappers = config.get(ColumnMappers.class);
ColumnMapper<?> leftMapper = columnMappers.findFor(leftType)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for Pair left parameter " + leftType));
ColumnMapper<?> rightMapper = columnMappers.findFor(rightType)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for Pair right parameter " + rightType));
RowMapper<?> pairMapper = (rs, ctx) ->
new Pair(leftMapper.map(rs, 1, ctx),
rightMapper.map(rs, 2, ctx));
return Optional.of(pairMapper);
}
}
Row mapper factories may be registered similar to regular row mappers:
jdbi.registerRowMapper(new PairMapperFactory());
try (Handle handle = jdbi.open()) {
List<Pair<String, String>> configPairs = handle
.createQuery("SELECT key, value FROM config")
.mapTo(new GenericType<Pair<String, String>>() {})
.list();
}
The GenericType utility class is discussed in Working with Generic Types.
|
4.5.2. Column Mappers
ColumnMapper is a functional interface, which maps a column from the current row of a JDBC ResultSet to a mapped type.
Since ColumnMapper is a functional interface, it can be provided inline to a query using a lambda expression:
List<Money> amounts = handle
.select("select amount from transactions where account_id = ?", accountId)
.map((rs, col, ctx) -> Money.parse(rs.getString(col))) (1)
.list();
1 | Whenever a column mapper is used to map rows, only the first column of each row is mapped. |
Column mappers may be defined as classes, which allows for re-use:
public class MoneyMapper implements ColumnMapper<Money> {
public Money map(ResultSet r, int columnNumber, StatementContext ctx) throws SQLException {
return Money.parse(r.getString(columnNumber));
}
}
List<Money> amounts = handle
.select("select amount from transactions where account_id = ?", accountId)
.map(new MoneyMapper())
.list();
This ColumnMapper is equivalent to the lambda mapper above, but more explicit.
ColumnMappers registry
Column mappers may be registered for specific types. This simplifies usage, requiring only that you specify what type you want to map to. Jdbi automatically looks up the mapper from the registry, and uses it.
jdbi.registerColumnMapper(Money.class,
(rs, col, ctx) -> Money.parse(rs.getString(col)));
List<Money> amounts = jdbi.withHandle(handle ->
handle.select("select amount from transactions where account_id = ?", accountId)
.mapTo(Money.class)
.list());
A mapper which implements ColumnMapper with an explicit mapped type (such as the MoneyMapper class in the previous section) may be registered without specifying the mapped type:
handle.registerColumnMapper(new MoneyMapper());
When this method is used, Jdbi inspects the generic class signature of the mapper to automatically discover the mapped type.
It is possible to register more than one mapper for any given type. When this happens, the last-registered mapper for a given type takes precedence. This permits optimizations, like registering a "default" mapper for some type, while allowing that default mapper to be overridden with a different one when appropriate.
Out of the box, column mappers are registered for the following types:
- Primitives: boolean, byte, short, int, long, char, float, and double
- java.lang: Boolean, Byte, Short, Integer, Long, Character, Float, Double, String, and Enum (stored as the enum value’s name by default)
- java.math: BigDecimal
- byte[] arrays (e.g. for BLOB or VARBINARY columns)
- java.net: InetAddress, URL, and URI
- java.sql: Timestamp
- java.time: Instant, LocalDate, LocalDateTime, LocalTime, OffsetDateTime, ZonedDateTime, and ZoneId
- java.util: UUID
- java.util.Collection and Java arrays (for array columns). Some additional setup may be required depending on the type of array element; see SQL Arrays.
The binding and mapping method for enum values can be controlled via the Enums config, as well as the EnumByName and EnumByOrdinal annotations.
ColumnMapperFactory
A ColumnMapperFactory can produce column mappers for arbitrary types.
Implementing a factory might be preferable to a regular column mapper if:
- The mapper class is generic, and could apply to multiple mapped types.
- The type being mapped is generic, and/or the mapper could be composed from other registered mappers.
- You want to bundle multiple mappers into a single class.
Let’s create a mapper factory for Optional<T> as an example. The factory should produce a ColumnMapper<Optional<T>> for any T, provided a column mapper is registered for T.
Let’s take this one step at a time:
public class OptionalColumnMapperFactory implements ColumnMapperFactory {
public Optional<ColumnMapper<?>> build(Type type, ConfigRegistry config) {
...
}
}
The build method accepts a mapped type and a config registry. It may return Optional.of(someMapper) if it knows how to map that type, or Optional.empty() otherwise.
First, we check whether the mapped type is an Optional:
if (!Optional.class.equals(GenericTypes.getErasedType(type))) {
return Optional.empty();
}
The GenericTypes utility class is discussed in Working with Generic Types.
Next, extract the T generic parameter from the mapped type:
Type t = GenericTypes.resolveType(Optional.class.getTypeParameters()[0], type);
The expression Optional.class.getTypeParameters()[0] gives the type variable T. We use resolveType() to resolve the type of T in the context of the mapped type.
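For readers curious what this resolution amounts to in the simple case, the same idea can be sketched with plain java.lang.reflect. This is an illustrative stand-in, not Jdbi’s actual implementation (GenericTypes.resolveType also handles inheritance and indirect type variables):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Optional;

public class ResolveSketch {
    // For a concrete Type such as Optional<String>, the type argument bound
    // to T is directly available as the first actual type argument.
    static Type firstTypeArgument(Type mappedType) {
        return ((ParameterizedType) mappedType).getActualTypeArguments()[0];
    }

    // Capture Optional<String> as a Type via the anonymous-subclass trick
    // that also underlies Jdbi's GenericType.
    abstract static class TypeToken<T> {
        Type type() {
            return ((ParameterizedType) getClass().getGenericSuperclass())
                    .getActualTypeArguments()[0];
        }
    }

    public static void main(String[] args) {
        Type optionalString = new TypeToken<Optional<String>>() {}.type();
        // Resolves T of Optional<String> to String
        System.out.println(firstTypeArgument(optionalString)); // class java.lang.String
    }
}
```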
Now that we have the type of T, we can look up a column mapper for that type from the ColumnMappers config class, through the config registry:
ColumnMapper<?> tMapper = config.get(ColumnMappers.class)
.findFor(t)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for parameter " + t + " of type " + type));
The config registry is a locator for config classes. So when we call config.get(ColumnMappers.class), we get back a ColumnMappers instance with the current column mapper configuration.
Next we call ColumnMappers.findFor() to get the column mapper for the embedded type.
You may have noticed that although this method can return Optional, we’re throwing an exception if we can’t find a mapper for the embedded type. We’ve found this to be a best practice: return Optional.empty() if the factory knows nothing about the mapped type (Optional, in this case). If it knows the mapped type but is missing some configuration to make it work (e.g. no mapper registered for the T parameter), it is more helpful to throw an exception with an informative message, so users can diagnose why the mapper isn’t working as expected.
Finally, we construct the column mapper for optionals, and return it:
ColumnMapper<?> optionalMapper = (rs, col, ctx) ->
Optional.ofNullable(tMapper.map(rs, col, ctx));
return Optional.of(optionalMapper);
Here is the factory class all together:
public class OptionalColumnMapperFactory implements ColumnMapperFactory {
public Optional<ColumnMapper<?>> build(Type type, ConfigRegistry config) {
if (!Optional.class.equals(GenericTypes.getErasedType(type))) {
return Optional.empty();
}
Type t = GenericTypes.resolveType(Optional.class.getTypeParameters()[0], type);
ColumnMapper<?> tMapper = config.get(ColumnMappers.class)
.findFor(t)
.orElseThrow(() -> new NoSuchMapperException(
"No column mapper registered for parameter " + t + " of type " + type));
ColumnMapper<?> optionalMapper = (rs, col, ctx) ->
Optional.ofNullable(tMapper.map(rs, col, ctx));
return Optional.of(optionalMapper);
}
}
Column mapper factories may be registered similarly to regular column mappers:
jdbi.registerColumnMapper(new OptionalColumnMapperFactory());
try (Handle handle = jdbi.open()) {
List<Optional<String>> middleNames = handle
.createQuery("select middle_name from contacts")
.mapTo(new GenericType<Optional<String>>() {})
.list();
}
The GenericType utility class is discussed in Working with Generic Types.
4.5.3. Primitive Mapping
All Java primitive types have default mappings to their corresponding JDBC types. Generally, Jdbi will automatically perform boxing and unboxing as appropriate when it encounters wrapper types.
By default, SQL null mapped to a primitive type will adopt the Java default value. This may be disabled by configuring jdbi.getConfig(ColumnMappers.class).setCoalesceNullPrimitivesToDefaults(false).
4.5.4. Immutables Mapping
Immutables value objects may be mapped; see the Immutables section for details.
4.5.5. Freebuilder Mapping
Freebuilder value objects may be mapped; see the Freebuilder section for details.
4.5.6. Reflection Mappers for Beans and POJOs
Jdbi provides a number of reflection-based mappers out of the box which treat column names as attributes. The different mappers can be used to map values onto:
- Java Bean properties (BeanMapper)
- Class fields (FieldMapper)
- Constructor parameters (ConstructorMapper)
There is also an equivalent mapper for Kotlin classes that supports constructor arguments and class properties.
Reflective mappers are snake_case aware and will automatically match snake_case column names to camelCase field/argument/property names.
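The matching can be pictured as a simple snake_case-to-camelCase normalization. The helper below is a hypothetical sketch of that idea, not Jdbi’s actual matching code (which, among other things, also compares names case-insensitively):

```java
public class CaseSketch {
    // Normalize a snake_case column name to camelCase, mimicking how a
    // reflective mapper can line up a "first_name" column with a
    // firstName property.
    static String snakeToCamel(String column) {
        StringBuilder sb = new StringBuilder(column.length());
        boolean upperNext = false;
        for (char c : column.toCharArray()) {
            if (c == '_') {
                upperNext = true;
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : Character.toLowerCase(c));
                upperNext = false;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(snakeToCamel("user_id"));    // userId
        System.out.println(snakeToCamel("first_name")); // firstName
    }
}
```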
Supported attribute annotations
Reflection mappers support the following annotations. Unless otherwise noted, all annotations are supported on bean getters and setters for the BeanMapper, constructor parameters for the ConstructorMapper, and bean fields for the FieldMapper.
- @ColumnName allows explicit naming of the column that is mapped to the specific attribute.
- @Nested for nested beans. Without this annotation, any attribute is treated as mappable from a single column. This annotation creates a mapper for the nested bean. There is a limitation that only one type of mapper can be nested at a time; BeanMapper will create another BeanMapper, ConstructorMapper will create another ConstructorMapper, etc. @Nested supports an optional prefix for all attributes in the nested bean (@Nested("prefix")).
- @PropagateNull on a given attribute propagates a null value for a specific attribute up: if the attribute or column is null, the whole bean is discarded and a null value is used instead of the bean itself. This annotation can also be used on the bean class with a column name that must be present in the result set. When used on an attribute, no value must be set.
- @Nullable for ConstructorMapper arguments. If no column is mapped onto a specific constructor argument, don’t fail but use null as the value. Any annotation called Nullable is accepted.
- @Unmappable for BeanMapper bean properties and FieldMapper fields. Any attribute with this annotation will be ignored.
The @ColumnName annotation only applies while mapping SQL data into Java objects. When binding object properties (e.g. with bindBean()), bind the property name (:id) rather than the column name (:user_id).
ConstructorMapper
ConstructorMapper assigns columns to constructor parameters by name. If the Java compiler stores method parameter names in the compiled classes (using the -parameters flag on the compiler; see also Compiling with Parameter Names), the constructor mapper will use these names to select columns.
public User(int id, String name) {
this.id = id;
this.name = name;
}
Otherwise, the names can be explicitly set with the @ColumnName annotation on each constructor parameter:
public User(@ColumnName("id") int id, @ColumnName("name") String name) {
this.id = id;
this.name = name;
}
The standard Java Bean @ConstructorProperties annotation can also be used:
@ConstructorProperties({"id", "name"})
public User(int id, String name) {
this.id = id;
this.name = name;
}
Lombok’s @AllArgsConstructor annotation generates the @ConstructorProperties annotation for you.
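Parameter names declared this way are plain annotation data, so they survive compilation even without the -parameters flag. The sketch below illustrates how such names can be read back via reflection using only the JDK; it is an illustrative stand-in, not Jdbi’s internal code:

```java
import java.beans.ConstructorProperties;
import java.lang.reflect.Constructor;

public class CtorPropsSketch {
    public static class User {
        final int id;
        final String name;

        @ConstructorProperties({"id", "name"})
        public User(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // Read the declared property names from the annotation, the way a
    // constructor-based mapper can discover them without -parameters.
    static String[] propertyNames(Constructor<?> ctor) {
        ConstructorProperties cp = ctor.getAnnotation(ConstructorProperties.class);
        return cp == null ? new String[0] : cp.value();
    }

    public static void main(String[] args) {
        // User has a single public constructor, so getConstructors()[0] is it.
        System.out.println(String.join(",", propertyNames(User.class.getConstructors()[0])));
    }
}
```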
Register a constructor mapper for your mapped class using the factory() or of() methods:
handle.registerRowMapper(ConstructorMapper.factory(User.class));
Set<User> userSet = handle.createQuery("SELECT * FROM \"user\" ORDER BY id ASC")
.mapTo(User.class)
.collect(Collectors.toSet());
assertThat(userSet).hasSize(4);
The constructor parameter names "id" and "name" match the database column names, so no custom mapper code is required at all.
Constructor mappers can be configured with a column name prefix for each mapped constructor parameter. This can help to disambiguate mapping joins, e.g. when two mapped classes have identical property names (like id or name):
handle.registerRowMapper(ConstructorMapper.factory(Contact.class, "c"));
handle.registerRowMapper(ConstructorMapper.factory(Phone.class, "p"));
handle.registerRowMapper(JoinRowMapper.forTypes(Contact.class, Phone.class));
List<JoinRow> contactPhones = handle.select("select " +
"c.id cid, c.name cname, " +
"p.id pid, p.name pname, p.number pnumber " +
"from contacts c left join phones p on c.id = p.contact_id")
.mapTo(JoinRow.class)
.list();
Typically, the mapped class will have a single constructor. If it has multiple constructors, Jdbi will pick one based on these rules:
- First, use the constructor annotated with @JdbiConstructor, if any.
- Next, use the constructor annotated with @ConstructorProperties, if any.
- Otherwise, throw an exception that Jdbi doesn’t know which constructor to use.
For legacy column names that don’t match up to property names, use the @ColumnName annotation to provide an exact column name.
public User(@ColumnName("user_id") int id, String name) {
this.id = id;
this.name = name;
}
Nested constructor-injected types can be mapped using the @Nested annotation:
public class User {
public User(int id,
String name,
@Nested Address address) {
...
}
}
public class Address {
public Address(String street,
String city,
String state,
String zip) {
...
}
}
handle.registerRowMapper(ConstructorMapper.factory(User.class));
List<User> users = handle
.select("select id, name, street, city, state, zip from users")
.mapTo(User.class)
.list();
The @Nested annotation has an optional value() attribute, which can be used to apply a column name prefix to each nested constructor parameter:
public User(int id,
String name,
@Nested("addr") Address address) {
...
}
handle.registerRowMapper(ConstructorMapper.factory(User.class));
List<User> users = handle
.select("select id, name, addr_street, addr_city, addr_state, addr_zip from users")
.mapTo(User.class)
.list();
By default, ConstructorMapper expects the result set to contain a column for every constructor parameter, and will throw an exception if any parameters cannot be mapped.
Parameters annotated @Nullable may be omitted from the result set, in which case ConstructorMapper will pass null to the constructor for that parameter.
public class User {
public User(int id,
String name,
@Nullable String passwordHash,
@Nullable @Nested Address address) {
...
}
}
In this example, the id and name columns must be present in the result set, but passwordHash and address are optional. If they are present, they will be mapped. Otherwise, null will be passed to the constructor.
Any @Nullable annotation from any package may be used. javax.annotation.Nullable is a good choice.
BeanMapper
BeanMapper assigns columns to bean properties. This mapper uses the standard Java Bean conventions.
public class UserBean {
private int id;
private String name;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
The standard bean convention requires that a setter returns void. The popular "builder pattern" convention of returning the bean itself does not follow the standard Java Bean convention, so such setters will not be found. This is a common source of problems when using the BeanMapper.
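The pitfall can be observed with the JDK’s own java.beans introspector, which applies the same convention: a fluent setter returning the bean is not recognized as a write method. This is an illustrative sketch of the convention, not Jdbi’s own property-discovery code:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class FluentSetterSketch {
    public static class FluentBean {
        private String name;

        public String getName() {
            return name;
        }

        // Returns the bean instead of void, so it is not a conventional
        // Java Bean setter.
        public FluentBean setName(String name) {
            this.name = name;
            return this;
        }
    }

    // True if the standard introspector finds a conventional (void) setter
    // for the given property.
    static boolean hasWriteMethod(Class<?> beanClass, String property) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getName().equals(property)) {
                return pd.getWriteMethod() != null;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // The property is readable via getName(), but the fluent setter
        // is not picked up as a write method.
        System.out.println(hasWriteMethod(FluentBean.class, "name"));
    }
}
```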
Register a bean mapper for your mapped class, using the factory() method:
handle.registerRowMapper(BeanMapper.factory(UserBean.class));
List<UserBean> users = handle
.createQuery("select id, \"name\" from \"user\"")
.mapTo(UserBean.class)
.list();
Alternatively, call mapToBean() instead of registering a bean mapper:
List<UserBean> users = handle
.createQuery("select id, \"name\" from \"user\"")
.mapToBean(UserBean.class)
.list();
Or use map() with an instance:
List<UserBean> users = handle
.createQuery("select id, \"name\" from \"user\"")
.map(BeanMapper.of(UserBean.class))
.list();
Bean mappers can be configured with a column name prefix for each mapped property. This can help to disambiguate mapping joins, e.g. when two mapped classes have identical property names (like id or name):
handle.registerRowMapper(BeanMapper.factory(ContactBean.class, "c"));
handle.registerRowMapper(BeanMapper.factory(PhoneBean.class, "p"));
handle.registerRowMapper(JoinRowMapper.forTypes(ContactBean.class, PhoneBean.class));
List<JoinRow> contactPhones = handle.select("select "
+ "c.id cid, c.\"name\" cname, "
+ "p.id pid, p.\"name\" pname, p.\"number\" pnumber "
+ "from contacts c left join phones p on c.id = p.contact_id")
.mapTo(JoinRow.class)
.list();
For legacy column names that don’t match up to property names, use the @ColumnName annotation to provide an exact column name.
public class User {
private int id;
@ColumnName("user_id")
public int getId() { return id; }
public void setId(int id) { this.id = id; }
}
The @ColumnName annotation can be placed on either the getter or setter method. If conflicting annotations exist, annotations on the setter are preferred.
Nested Java Bean types can be mapped using the @Nested annotation:
public class User {
private int id;
private String name;
private Address address;
... (getters and setters)
@Nested (1)
public Address getAddress() { ... }
public void setAddress(Address address) { ... }
}
public class Address {
private String street;
private String city;
private String state;
private String zip;
... (getters and setters)
}
(1) The @Nested annotation can be placed on either the getter or setter method. The setter is preferred.
handle.registerRowMapper(BeanMapper.factory(User.class));
List<User> users = handle
.select("select id, name, street, city, state, zip from users")
.mapTo(User.class)
.list();
The @Nested annotation has an optional value() attribute, which can be used to apply a column name prefix to each nested bean property:
@Nested("addr")
public Address getAddress() { ... }
handle.registerRowMapper(BeanMapper.factory(User.class));
List<User> users = handle
.select("select id, name, addr_street, addr_city, addr_state, addr_zip from users")
.mapTo(User.class)
.list();
@Nested properties are left unmodified (i.e. null) if the result set has no columns matching any properties of the nested object.
FieldMapper
FieldMapper uses reflection to map database columns directly to object fields (including private fields).
public class User {
public int id;
public String name;
}
Register a field mapper for your mapped class, using the factory() method:
handle.registerRowMapper(FieldMapper.factory(User.class));
List<User> users = handle
.createQuery("select id, name from user")
.mapTo(User.class)
.list();
Field mappers can be configured with a column name prefix for each mapped field. This can help to disambiguate mapping joins, e.g. when two mapped classes have identical property names (like id or name):
handle.registerRowMapper(FieldMapper.factory(Contact.class, "c"));
handle.registerRowMapper(FieldMapper.factory(Phone.class, "p"));
handle.registerRowMapper(JoinRowMapper.forTypes(Contact.class, Phone.class));
List<JoinRow> contactPhones = handle.select("select " +
"c.id cid, c.name cname, " +
"p.id pid, p.name pname, p.number pnumber " +
"from contacts c left join phones p on c.id = p.contact_id")
.mapTo(JoinRow.class)
.list();
For legacy column names that don’t match up to field names, use the @ColumnName annotation to provide an exact column name:
public class User {
@ColumnName("user_id")
public int id;
public String name;
}
Nested field-mapped types can be mapped using the @Nested annotation:
public class User {
public int id;
public String name;
@Nested
public Address address;
}
public class Address {
public String street;
public String city;
public String state;
public String zip;
}
handle.registerRowMapper(FieldMapper.factory(User.class));
List<User> users = handle
.select("select id, name, street, city, state, zip from users")
.mapTo(User.class)
.list();
The @Nested annotation has an optional value() attribute, which can be used to apply a column name prefix to each nested field:
public class User {
public int id;
public String name;
@Nested("addr")
public Address address;
}
handle.registerRowMapper(FieldMapper.factory(User.class));
List<User> users = handle
.select("select id, name, addr_street, addr_city, addr_state, addr_zip from users")
.mapTo(User.class)
.list();
@Nested fields are left unmodified (i.e. null) if the result set has no columns matching any fields of the nested object.
4.5.7. Map.Entry mapping
Out of the box, Jdbi registers a RowMapper<Map.Entry<K,V>>. Since each row in the result set is a Map.Entry<K,V>, the entire result set can be easily collected into a Map<K,V> (or Guava’s Multimap<K,V>).
A mapper must be registered for both the key and value type.
Join rows can be gathered into a map result by specifying the generic map signature:
CharSequence sql = "select u.id u_id, u.name u_name, p.id p_id, p.phone p_phone "
+ "from \"user\" u left join phone p on u.id = p.user_id";
Map<User, Phone> map = h.createQuery(sql)
.registerRowMapper(ConstructorMapper.factory(User.class, "u"))
.registerRowMapper(ConstructorMapper.factory(Phone.class, "p"))
.collectInto(new GenericType<Map<User, Phone>>() {});
In the preceding example, the User mapper uses a "u" column name prefix, and the Phone mapper uses "p". Since each mapper only reads columns with the expected prefix, the respective id columns are unambiguous.
A unique index (e.g. by ID column) can be obtained by setting the key column name:
Map<Integer, User> map = h.createQuery("select * from \"user\"")
.setMapKeyColumn("id")
.registerRowMapper(ConstructorMapper.factory(User.class))
.collectInto(new GenericType<Map<Integer, User>>() {});
Set both the key and value column names to gather a two-column query into a map result:
Map<String, String> map = h.createQuery("select \"key\", \"value\" from config")
.setMapKeyColumn("key")
.setMapValueColumn("value")
.collectInto(new GenericType<Map<String, String>>() {});
All the above examples assume a one-to-one key/value relationship. What if there is a one-to-many relationship?
Google Guava provides a Multimap type, which supports mapping multiple values per key.
First, follow the instructions in the Google Guava section to install GuavaPlugin into Jdbi.
Then, simply ask for a Multimap instead of a Map:
String sql = "select u.id u_id, u.name u_name, p.id p_id, p.phone p_phone "
+ "from \"user\" u left join phone p on u.id = p.user_id";
Multimap<User, Phone> map = h.createQuery(sql)
.registerRowMapper(ConstructorMapper.factory(User.class, "u"))
.registerRowMapper(ConstructorMapper.factory(Phone.class, "p"))
.collectInto(new GenericType<Multimap<User, Phone>>() {});
The collectInto() method is worth explaining. When you call it, several things happen behind the scenes:
- Consult the JdbiCollectors registry to obtain a CollectorFactory which supports the given container type.
- Next, ask that CollectorFactory to extract the element type from the container type signature. In the above example, the element type of Multimap<User,Phone> is Map.Entry<User,Phone>.
- Obtain a mapper for that element type from the mapping registry.
- Obtain a Collector for the container type from the CollectorFactory.
- Finally, return map(elementMapper).collect(collector).
If the lookup for the collector factory, element type, or element mapper fails, an exception is thrown.
Jdbi can be enhanced to support arbitrary container types. See [JdbiCollectors] for more information.
4.6. Codecs
The Codec API is still unstable and may change.
A codec is a replacement for registering an argument and a column mapper for a type. It is responsible for serializing a typed value into a database column and creating a type from a database column.
Codecs are collected in a codec factory which can be registered with the registerCodecFactory convenience method.
// register a single codec
jdbi.registerCodecFactory(CodecFactory.forSingleCodec(type, codec));
// register a few codecs
jdbi.registerCodecFactory(CodecFactory.builder()
// register a codec by qualified type
.addCodec(QualifiedType.of(Foo.class), codec1)
// register a codec by direct java type
.addCodec(Foo.class, codec2)
// register a codec by generic type
.addCodec(new GenericType<Set<Foo>>() {}, codec3)
.build());
// register many codecs
Map<QualifiedType<?>, Codec<?>> codecs = ...
jdbi.registerCodecFactory(new CodecFactory(codecs));
Codec example:
public class Counter {
private int count = 0;
public Counter() {}
public int nextValue() {
return count++;
}
private Counter setValue(int value) {
this.count = value;
return this;
}
private int getValue() {
return count;
}
/**
* Codec to persist a counter to the database and restore it back.
*/
public static class CounterCodec implements Codec<Counter> {
@Override
public ColumnMapper<Counter> getColumnMapper() {
return (r, idx, ctx) -> new Counter().setValue(r.getInt(idx));
}
@Override
public Function<Counter, Argument> getArgumentFunction() {
return counter -> (idx, stmt, ctx) -> stmt.setInt(idx, counter.getValue());
}
}
}
JDBI core API:
// register the codec with JDBI
jdbi.registerCodecFactory(CodecFactory.forSingleCodec(COUNTER_TYPE, new CounterCodec()));
// store object
int result = jdbi.withHandle(h -> h.createUpdate("INSERT INTO counters (id, \"value\") VALUES (:id, :value)")
.bind("id", counterId)
.bindByType("value", counter, COUNTER_TYPE)
.execute());
// load object
Counter restoredCounter = jdbi.withHandle(h -> h.createQuery("SELECT \"value\" from counters where id = :id")
.bind("id", counterId)
.mapTo(COUNTER_TYPE).first());
SQL Object API uses the codecs transparently:
// SQL object dao
public interface CounterDao {
@SqlUpdate("INSERT INTO counters (id, \"value\") VALUES (:id, :value)")
int storeCounter(@Bind("id") String id, @Bind("value") Counter counter);
@SqlQuery("SELECT \"value\" from counters where id = :id")
Counter loadCounter(@Bind("id") String id);
}
// register the codec with JDBI
jdbi.registerCodecFactory(CodecFactory.forSingleCodec(COUNTER_TYPE, new CounterCodec()));
// store object
int result = jdbi.withExtension(CounterDao.class, dao -> dao.storeCounter(counterId, counter));
// load object
Counter restoredCounter = jdbi.withExtension(CounterDao.class, dao -> dao.loadCounter(counterId));
4.6.1. Resolving Types
By using the TypeResolvingCodecFactory from the guava module, it is possible to use codecs that are registered for subclasses or interface types for concrete classes. This is necessary to e.g. map AutoValue generated classes to database columns.
In the following example, only a codec for Value<String> is registered, but the code uses beans and classes that are concrete implementations (StringBean contains a StringValue, and StringValue implements the Value<String> interface). The TypeResolvingCodecFactory will inspect the types to find a codec for an interface or superclass if no perfect match can be found.
// SQL object dao using concrete types
public interface DataDao {
@SqlUpdate("INSERT INTO data (id, \"value\") VALUES (:bean.id, :bean.value)")
int storeData(@BindBean("bean") StringBean bean);
@SqlUpdate("INSERT INTO data (id, \"value\") VALUES (:id, :value)")
int storeData(@Bind("id") String id, @Bind("value") StringValue data);
@SqlQuery("SELECT \"value\" from data where id = :id")
StringValue loadData(@Bind("id") String id);
}
// generic type representation
public static final QualifiedType<Value<String>> DATA_TYPE = QualifiedType.of(new GenericType<Value<String>>() {});
public static class DataCodec implements Codec<Value<String>> {
@Override
public ColumnMapper<Value<String>> getColumnMapper() {
return (r, idx, ctx) -> new StringValue(r.getString(idx));
}
@Override
public Function<Value<String>, Argument> getArgumentFunction() {
return data -> (idx, stmt, ctx) -> stmt.setString(idx, data.getValue());
}
}
// value interface
public interface Value<T> {
T getValue();
}
// bean using concrete types, not interface types.
public static class StringBean implements Bean<Value<String>> {
private final String id;
private final StringValue value;
public StringBean(String id, StringValue value) {
this.id = id;
this.value = value;
}
@Override
public String getId() {
return id;
}
@Override
public StringValue getValue() {
return value;
}
}
4.7. Templating
Binding query parameters, as described above, is excellent for sending a static set of parameters to the database engine. Binding ensures that the parameterized query string (… where foo = ?) is transmitted to the database without allowing hostile parameter values to inject SQL.
Bound parameters are not always enough. Sometimes a query needs complicated or structural changes before being executed, and parameters just don’t cut it. Templating (using a TemplateEngine) allows you to alter a query’s content with general String manipulations.
Typical uses for templating are optional or repeating segments (conditions and loops), complex variables such as comma-separated lists for IN clauses, and variable substitution for non-bindable SQL elements (like table names). Unlike argument binding, the rendering of attributes performed by TemplateEngines is not SQL-aware. Since they perform generic String manipulations, TemplateEngines can easily produce horribly mangled or subtly defective queries if you don’t use them carefully.
Query templating is a common attack vector! Always prefer binding parameters to static SQL over dynamic SQL when possible.
handle.createQuery("select * from <TABLE> where name = :n")
// -> "select * from Person where name = :n"
.define("TABLE", "Person")
// -> "select * from Person where name = 'MyName'"
.bind("n", "MyName");
Use a TemplateEngine to perform crude String manipulations on a query. Query parameters should be handled by Arguments.
TemplateEngines and SqlParsers operate sequentially: the initial String will be rendered by the TemplateEngine using attributes, then parsed by the SqlParser with Argument bindings.
If the TemplateEngine outputs text matching the parameter format of the SqlParser, the parser will attempt to bind an Argument to it. This can be useful to e.g. have named parameters of which the name itself is also a variable, but can also cause confusing bugs:
String paramName = "arg";
handle.createQuery("select * from Foo where bar = :<attr>")
.define("attr", paramName)
...
.bind(paramName, "baz"); // <- does not need to know the parameter's name ("arg")!
handle.createQuery("select * from emotion where emoticon = <sticker>")
.define("sticker", ":-)") // -> "... where emoticon = :-)"
.mapToMap()
// exception: no binding/argument named "-)" present
.list();
Bindings and definitions are usually separate. You can link them in a limited manner using the stmt.defineNamedBindings() or @DefineNamedBindings customizers. For each bound parameter (including bean properties), this will define a boolean which is true if the binding is present and not null. You can use this to craft conditional updates and query clauses.
For example,
class MyBean {
    private long id;
    private String a;
    private String b;
    private Instant modified;

    public long getId() { return id; }
    public String getA() { return a; }
    public String getB() { return b; }
    public Instant getModified() { return modified; }
}
handle.createUpdate("update mybeans set <if(a)>a = :a,<endif> <if(b)>b = :b,<endif> modified=now() where id=:id")
.bindBean(mybean)
.defineNamedBindings()
.execute();
Also see the section about TemplateEngine.
4.7.1. ClasspathSqlLocator
You may find it helpful to store your SQL templates in individual files on the classpath, rather than in strings inside Java code.
The ClasspathSqlLocator converts Java type and method names into classpath locations, and then reads, parses, and caches the loaded statements.
// reads classpath resource com/foo/BarDao/query.sql
ClasspathSqlLocator.create().locate(com.foo.BarDao.class, "query");
// same resource as above
ClasspathSqlLocator.create().locate("com.foo.BarDao.query");
By default, any comments in the loaded file are left untouched. Comments can be stripped out by instantiating the ClasspathSqlLocator with the removingComments() method:
// reads classpath resource com/foo/BarDao/query.sql, stripping all comments
ClasspathSqlLocator.removingComments().locate(com.foo.BarDao.class, "query");
// same resource as above
ClasspathSqlLocator.removingComments().locate("com.foo.BarDao.query");
Multiple comment styles are supported:
- C-style (/* … */ and // to the end of the line)
- SQL style (-- to the end of the line)
- Shell style (# to the end of the line, except when followed immediately by the > character; this is required for the Postgres #> and #>> operators)
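For example, a hypothetical com/foo/BarDao/query.sql resource might mix these comment styles; with removingComments(), only the SQL itself is returned to the caller:

```sql
-- fetch a single row by id (SQL-style comment)
/* C-style block comment */
select id, name
from bar
where id = :id  // trailing C-style line comment
```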
4.8. SQL Arrays
Jdbi can bind/map Java arrays to/from SQL arrays:
handle.createUpdate("insert into groups (id, user_ids) values (:id, :userIds)")
.bind("id", 1)
.bind("userIds", new int[] { 10, 5, 70 })
.execute();
int[] userIds = handle.createQuery("select user_ids from groups where id = :id")
.bind("id", 1)
.mapTo(int[].class)
.one();
You can also use Collections in place of arrays, but you’ll need to provide the element type if you’re using the fluent API, since it’s erased:
handle.createUpdate("insert into groups (id, user_ids) values (:id, :userIds)")
.bind("id", 1)
.bindArray("userIds", int.class, Arrays.asList(10, 5, 70))
.execute();
List<Integer> userIds = handle.createQuery("select user_ids from groups where id = :id")
.bind("id", 1)
.mapTo(new GenericType<List<Integer>>() {})
.one();
Use @SingleValue for mapping an array result with the SqlObject API:
public interface GroupsDao {
@SqlQuery("select user_ids from groups where id = ?")
@SingleValue
List<Integer> getUserIds(int groupId);
}
4.8.1. Registering array types
Any Java array element type you want binding support for needs to be registered with Jdbi’s SqlArrayTypes registry. An array type that is directly supported by your JDBC driver can be registered using:
jdbi.registerArrayType(int.class, "integer");
Here, "integer" is the SQL type name that the JDBC driver supports natively.
Plugins like PostgresPlugin and H2DatabasePlugin automatically register the most common array element types for their respective databases.
Postgres supports enum array types, so you can register an array type for enum Colors { red, blue } using jdbi.registerArrayType(Colors.class, "colors"), where "colors" is a user-defined enum type name in your database.
4.8.2. Binding custom array types
You can also provide your own implementation of SqlArrayType that converts a custom Java element type to a type supported by the JDBC driver:
class UserArrayType implements SqlArrayType<User> {
@Override
public String getTypeName() {
return "integer";
}
@Override
public Object convertArrayElement(User user) {
return user.getId();
}
}
You can now bind instances of User[] to arguments of data type integer[]:
User user1 = new User(1, "bob");
User user2 = new User(42, "alice");
handle.registerArrayType(new UserArrayType());
handle.createUpdate("insert into groups (id, user_ids) values (:id, :users)")
.bind("id", 1)
.bind("users", new User[] { user1, user2 })
.execute();
Like the Arguments Registry, if there are multiple SqlArrayTypes registered for the same data type, the last registered wins.
4.8.3. Mapping array types
SqlArrayType only allows you to bind Java array/collection arguments to their SQL counterparts. To map SQL array columns back to Java types, you can register a regular ColumnMapper:
public class UserIdColumnMapper implements ColumnMapper<UserId> {
@Override
public UserId map(ResultSet rs, int col, StatementContext ctx) throws SQLException {
return new UserId(rs.getInt(col));
}
}
handle.registerColumnMapper(new UserIdColumnMapper());
List<UserId> userIds = handle.createQuery("select user_ids from groups where id = :id")
.bind("id", 1)
.mapTo(new GenericType<List<UserId>>() {})
.one();
Array columns can be mapped to any container type registered with the JdbiCollectors registry. E.g. a VARCHAR[] may be mapped to an ImmutableList<String> if the Guava plugin is installed. |
4.9. Results
After executing a database query, you need to interpret the results. JDBC provides the ResultSet class which can do simple mapping to Java primitives and built in classes, but the API is often cumbersome to use. Jdbi provides configurable mapping, including the ability to register custom mappers for rows and columns.
A RowMapper converts a row of a ResultSet into a result object.
A ColumnMapper converts a single column’s value into a Java object. It can be used as a RowMapper if there is only one column present, or it can be used to build more complex RowMapper types.
The mapper is selected based on the declared result type of your query.
Jdbi iterates over the rows in the ResultSet and presents the mapped results to you in a container such as a List, Stream, Optional, or Iterator.
public static class User {
final int id;
final String name;
public User(int id, String name) {
this.id = id;
this.name = name;
}
}
@BeforeEach
public void setUp() {
handle = h2Extension.getSharedHandle();
handle.execute("CREATE TABLE \"user\" (id INTEGER PRIMARY KEY AUTO_INCREMENT, \"name\" VARCHAR)");
for (String name : Arrays.asList("Alice", "Bob", "Charlie", "Data")) {
handle.execute("INSERT INTO \"user\" (\"name\") VALUES (?)", name);
}
}
@Test
public void findBob() {
User u = findUserById(2).orElseThrow(() -> new AssertionError("No user found"));
assertThat(u.id).isEqualTo(2);
assertThat(u.name).isEqualTo("Bob");
}
public Optional<User> findUserById(long id) {
RowMapper<User> userMapper =
(rs, ctx) -> new User(rs.getInt("id"), rs.getString("name"));
return handle.createQuery("SELECT * FROM \"user\" WHERE id=:id")
.bind("id", id)
.map(userMapper)
.findFirst();
}
When Jdbi consumes all rows to make the result, such as a List<T> or a single result, the underlying ResultSet will be closed before returning the result. Otherwise, such as with a Stream<T> or an Iterable<T> result, you are responsible for calling close() on the result object, which closes the ResultSet. Prefer automatic resource management like the use or with methods, or use try-with-resources. Generally, ending the transaction, or closing the Handle (Connection), will release all associated result sets. Attempting to iterate a closed result set is an error.
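The ownership rule can be sketched with plain JDK streams. In this illustrative sketch (not Jdbi API), the onClose handler stands in for Jdbi releasing the underlying ResultSet:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Stream;

public class StreamCloseDemo {
    // Returns true if the stream's close handler ran.
    static boolean closedAfterUse() {
        AtomicBoolean closed = new AtomicBoolean(false);
        // Stands in for a Stream<T> returned by a Jdbi query; the onClose
        // handler plays the role of releasing the underlying ResultSet.
        Stream<String> results = Stream.of("Alice", "Bob").onClose(() -> closed.set(true));
        try (Stream<String> s = results) { // try-with-resources guarantees close()
            s.filter(name -> name.startsWith("A")).count();
        }
        return closed.get();
    }

    public static void main(String[] args) {
        System.out.println(closedAfterUse()); // prints "true"
    }
}
```

The same try-with-resources shape applies to any Stream or Iterator result Jdbi hands you.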
4.9.1. ResultBearing
The ResultBearing interface represents a result set of a database operation, which has not been mapped to any particular result type.
TODO:
- Query implements ResultBearing
- Update.executeAndReturnGeneratedKeys() returns ResultBearing
- PreparedBatch.executeAndReturnGeneratedKeys() returns ResultBearing
- A ResultBearing object can be mapped, which returns a ResultIterable of the mapped type:
  - mapTo(Type | Class | GenericType) if a mapper is registered for the type
  - map(RowMapper | ColumnMapper)
  - mapToBean() for bean types
  - mapToMap(), which returns Map<String, Object> mapping lower-cased column names to values
- reduceRows
  - RowView
- reduceResultSet
- collectInto, e.g. with a GenericType token. Implies a mapTo() and a collect() in one operation, e.g. collectInto(new GenericType<List<User>>(){}) is the same as mapTo(User.class).collect(toList())
- Provide list of container types supported out of the box
4.9.2. ResultIterable
ResultIterable represents a result set which has been mapped to a specific type, e.g. ResultIterable<User>.
TODO:
- ResultIterable.forEach
- ResultIterable.iterator()
  - Must be explicitly closed, to release database resources.
  - Use try-with-resources to ensure database resources get cleaned up.
Find a Single Result
ResultIterable.one() returns the only row in the result set. If zero or multiple rows are encountered, it will throw IllegalStateException.
ResultIterable.findOne() returns an Optional<T> of the only row in the result set, or Optional.empty() if no rows are returned.
ResultIterable.first() returns the first row in the result set. If zero rows are encountered, IllegalStateException is thrown.
ResultIterable.findFirst() returns an Optional<T> of the first row, if any.
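The four methods differ only in how they treat zero and multiple rows. A pure-JDK sketch of these contracts over an already-mapped list of rows (the helpers below are illustrative, not Jdbi API):

```java
import java.util.List;
import java.util.Optional;

public class SingleResultDemo {
    // one(): exactly one row, else IllegalStateException
    static <T> T one(List<T> rows) {
        if (rows.size() != 1) {
            throw new IllegalStateException("expected one row, got " + rows.size());
        }
        return rows.get(0);
    }

    // findOne(): at most one row; empty Optional for zero rows
    static <T> Optional<T> findOne(List<T> rows) {
        if (rows.size() > 1) {
            throw new IllegalStateException("expected at most one row, got " + rows.size());
        }
        return rows.isEmpty() ? Optional.empty() : Optional.of(rows.get(0));
    }

    // first(): at least one row, else IllegalStateException
    static <T> T first(List<T> rows) {
        if (rows.isEmpty()) {
            throw new IllegalStateException("expected at least one row");
        }
        return rows.get(0);
    }

    // findFirst(): first row if any, otherwise empty
    static <T> Optional<T> findFirst(List<T> rows) {
        return rows.isEmpty() ? Optional.empty() : Optional.of(rows.get(0));
    }

    public static void main(String[] args) {
        System.out.println(first(List.of("Alice", "Bob"))); // prints "Alice"
        System.out.println(findFirst(List.of()));           // prints "Optional.empty"
    }
}
```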
Stream
Stream integration allows you to use a RowMapper to adapt a ResultSet into the new Java 8 Streams framework. As long as your database supports streaming results (for example, PostgreSQL will do it as long as you are in a transaction and set a fetch size), the stream will lazily fetch rows from the database as necessary.
#stream returns a Stream<T>. You should then process the stream and produce a result. This stream must be closed to release any database resources held, so we recommend useStream, withStream or alternately a try-with-resources block to ensure that no resources are leaked.
handle.createQuery("SELECT id, name FROM user ORDER BY id ASC")
.map(new UserMapper())
.useStream(stream -> {
Optional<String> first = stream
.filter(u -> u.id > 2)
.map(u -> u.name)
.findFirst();
assertThat(first).contains("Charlie");
});
#withStream and #useStream handle closing the stream for you. You provide a StreamCallback that produces a result or a StreamConsumer that produces no result, respectively.
List
#list emits a List<T>. This necessarily buffers all results in memory.
List<User> users =
handle.createQuery("SELECT id, name FROM user")
.map(new UserMapper())
.list();
Collectors
#collect takes a Collector<T, ?, R> that builds a resulting collection R<T>. The java.util.stream.Collectors class has a number of interesting Collector implementations to start with.
You can also write your own custom collectors. For example, to accumulate found rows into a Map:
h.execute("insert into something (id, name) values (1, 'Alice'), (2, 'Bob'), (3, 'Chuckles')");
Map<Integer, Something> users = h.createQuery("select id, name from something")
.mapTo(Something.class)
.collect(Collector.of(HashMap::new, (accum, item) -> {
accum.put(item.getId(), item); // Each entry is added into an accumulator map
}, (l, r) -> {
l.putAll(r); // While jdbi does not process rows in parallel,
return l; // the Collector contract encourages writing combiners.
}, Characteristics.IDENTITY_FINISH));
Reduction
#reduce provides a simplified Stream#reduce. Given an identity starting value and a BiFunction<U, T, U>, it will repeatedly combine U and T until only a single U remains, and then return that.
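The same identity-plus-BiFunction shape exists on plain JDK streams, which makes the contract easy to see in isolation. This sketch folds already-mapped names rather than Jdbi rows:

```java
import java.util.List;

public class ReduceDemo {
    // Folds each element into the accumulator, mirroring the
    // reduce(U identity, BiFunction<U, T, U>) contract described above.
    static int totalLength(List<String> names) {
        return names.stream().reduce(0, (acc, name) -> acc + name.length(), Integer::sum);
    }

    public static void main(String[] args) {
        // 5 + 3 + 7 = 15
        System.out.println(totalLength(List.of("Alice", "Bob", "Charlie"))); // prints "15"
    }
}
```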
ResultSetScanner
The ResultSetScanner interface accepts a lazily-provided ResultSet and produces the result Jdbi returns from statement execution.
Most of the above operations are implemented in terms of ResultSetScanner. The Scanner has ownership of the ResultSet and may advance or seek it.
The return value ends up being the final result of statement execution.
Most users should prefer using the higher level result collectors described above, but someone’s gotta do the dirty work.
4.9.3. Joins
Joining multiple tables together is a very common database task. It is also where the mismatch between the relational model and Java’s object model starts to rear its ugly head.
Here we present a couple of strategies for retrieving results from more complicated rows.
Consider a contact list app as an example. The contact list contains any number of contacts. Contacts have a name, and any number of phone numbers. Phone numbers have a type (e.g. home, work) and a phone number:
class Contact {
Long id;
String name;
List<Phone> phones = new ArrayList<>();
void addPhone(Phone phone) {
phones.add(phone);
}
}
class Phone {
Long id;
String type;
String phone;
}
We’ve left out getters, setters, and access modifiers for brevity.
Since we’ll be reusing the same queries, we’ll define them as constants now:
static final String SELECT_ALL = "select contacts.id c_id, name c_name, "
+ "phones.id p_id, type p_type, phones.phone p_phone "
+ "from contacts left join phones on contacts.id = phones.contact_id "
+ "order by c_name, p_type ";
static final String SELECT_ONE = SELECT_ALL + "where phones.id = :id";
Note that we’ve given aliases (e.g. c_id, p_id) to distinguish columns of the same name (id) from different tables.
Jdbi provides a few different APIs for dealing with joined data.
ResultBearing.reduceRows()
The ResultBearing.reduceRows(U, BiFunction) method accepts an accumulator seed value and a lambda function. For each row in the result set, Jdbi calls the lambda with the current accumulator value and a RowView over the current row of the result set. The value returned for each row becomes the input accumulator passed in for the next row. After the last row has been processed, reduceRows() returns the last value returned from the lambda.
List<Contact> contacts = handle.createQuery(SELECT_ALL)
.registerRowMapper(BeanMapper.factory(Contact.class, "c"))
.registerRowMapper(BeanMapper.factory(Phone.class, "p")) (1)
.reduceRows(new LinkedHashMap<Long, Contact>(), (2)
(map, rowView) -> {
Contact contact = map.computeIfAbsent( (3)
rowView.getColumn("c_id", Long.class),
id -> rowView.getRow(Contact.class));
if (rowView.getColumn("p_id", Long.class) != null) { (4)
contact.addPhone(rowView.getRow(Phone.class));
}
return map; (5)
})
.values() (6)
.stream()
.collect(toList()); (7)
1 | Register row mappers for Contact and Phone. Note the "c" and "p" arguments used; these are column name prefixes. By registering mappers with prefixes, the Contact mapper will only map the c_id and c_name columns, whereas the Phone mapper will only map p_id, p_type, and p_phone. |
2 | Use an empty LinkedHashMap as the accumulator seed, mapped by contact ID. LinkedHashMap is a good accumulator when selecting multiple master records, since it has fast storage and lookup while preserving insertion order (which helps honor ORDER BY clauses). If ordering is unimportant, a HashMap would also suffice. |
3 | Load the Contact from the accumulator if we already have it; otherwise, initialize it through the RowView. |
4 | If the p_id column is not null, load the phone number from the current row and add it to the current contact. |
5 | Return the input map (now sporting an additional contact and/or phone) as the accumulator for the next row. |
6 | At this point, all rows have been read into memory, and we don’t need the contact ID keys. So we call Map.values() to get a Collection<Contact>. |
7 | Collect the contacts into a List<Contact>. |
Alternatively, the ResultBearing.reduceRows(RowReducer) variant accepts a RowReducer and returns a stream of reduced elements.
For simple master-detail joins, the ResultBearing.reduceRows(BiConsumer<Map<K,V>,RowView>) method makes it easy to reduce these joins into a stream of master elements.
Adapting the example above:
List<Contact> contacts = handle.createQuery(SELECT_ALL)
.registerRowMapper(BeanMapper.factory(Contact.class, "c"))
.registerRowMapper(BeanMapper.factory(Phone.class, "p"))
.reduceRows((Map<Long, Contact> map, RowView rowView) -> { (1)
Contact contact = map.computeIfAbsent(
rowView.getColumn("c_id", Long.class),
id -> rowView.getRow(Contact.class));
if (rowView.getColumn("p_id", Long.class) != null) {
contact.addPhone(rowView.getRow(Phone.class));
}
(2)
})
.collect(toList()); (3)
1 | The lambda receives a map where result objects will be stored, and a RowView. The map is a LinkedHashMap, so the result stream will yield the result objects in the same order they were inserted. |
2 | No return statement needed. The same map is reused on every row. |
3 | This reduceRows() invocation produces a Stream<Contact> (i.e. from map.values().stream()). In this example, we collect the elements into a list, but we could call any Stream method here. |
You may be wondering about the getRow() and getColumn() calls to rowView.
When you call rowView.getRow(SomeType.class), RowView looks up the registered row mapper for SomeType, and uses it to map the current row to a SomeType object.
Likewise, when you call rowView.getColumn("my_value", MyValueType.class), RowView looks up the registered column mapper for MyValueType, and uses it to map the my_value column of the current row to a MyValueType object.
Now let’s do the same thing, but for a single contact:
Optional<Contact> contact = handle.createQuery(SELECT_ONE)
.bind("id", contactId)
.registerRowMapper(BeanMapper.factory(Contact.class, "c"))
.registerRowMapper(BeanMapper.factory(Phone.class, "p"))
.reduceRows(LinkedHashMapRowReducer.<Long, Contact> of((map, rowView) -> {
Contact c = map.computeIfAbsent(
rowView.getColumn("c_id", Long.class),
id -> rowView.getRow(Contact.class));
if (rowView.getColumn("p_id", Long.class) != null) {
c.addPhone(rowView.getRow(Phone.class));
}
}))
.findFirst();
ResultBearing.reduceResultSet()
ResultBearing.reduceResultSet() is a low-level API similar to reduceRows(), except it provides direct access to the JDBC ResultSet instead of a RowView for each row.
This method can provide superior performance compared to reduceRows(), at the expense of verbosity:
List<Contact> contacts = handle.createQuery(SELECT_ALL)
.reduceResultSet(new LinkedHashMap<Long, Contact>(),
(acc, resultSet, ctx) -> {
long contactId = resultSet.getLong("c_id");
Contact contact;
if (acc.containsKey(contactId)) {
contact = acc.get(contactId);
} else {
contact = new Contact();
acc.put(contactId, contact);
contact.setId(contactId);
contact.setName(resultSet.getString("c_name"));
}
long phoneId = resultSet.getLong("p_id");
if (!resultSet.wasNull()) {
Phone phone = new Phone();
phone.setId(phoneId);
phone.setType(resultSet.getString("p_type"));
phone.setPhone(resultSet.getString("p_phone"));
contact.addPhone(phone);
}
return acc;
})
.values()
.stream()
.collect(toList());
JoinRowMapper
The JoinRowMapper takes a set of types to extract from each row. It uses the mapping registry to determine how to map each given type, and presents you with a JoinRow that holds all of the resulting values.
Let’s consider two simple types, User and Article, with a join table named Author. Guava provides a Multimap class which is very handy for representing joined tables like this. Assuming we have mappers already registered:
h.registerRowMapper(ConstructorMapper.factory(User.class));
h.registerRowMapper(ConstructorMapper.factory(Article.class));
we can then easily populate a Multimap with the mapping from the database:
Multimap<User, Article> joined = HashMultimap.create();
h.createQuery("SELECT * FROM \"user\" NATURAL JOIN author NATURAL JOIN article")
.map(JoinRowMapper.forTypes(User.class, Article.class))
.forEach(jr -> joined.put(jr.get(User.class), jr.get(Article.class)));
While this approach is easy to read and write, it can be inefficient for certain patterns of data. Consider performance requirements when deciding whether to use high level mapping or more direct low level access with handwritten mappers. |
You can also use it with SqlObject:
public interface UserArticleDao {
@RegisterJoinRowMapper({User.class, Article.class})
@SqlQuery("SELECT * FROM \"user\" NATURAL JOIN author NATURAL JOIN article")
Stream<JoinRow> getAuthorship();
}
Multimap<User, Article> joined = HashMultimap.create();
handle.attach(UserArticleDao.class)
.getAuthorship()
.forEach(jr -> joined.put(jr.get(User.class), jr.get(Article.class)));
assertThat(joined).isEqualTo(JoinRowMapperTest.getExpected());
4.10. Updates
Updates are operations that return an integer number of rows modified, such as a database INSERT, UPDATE, or DELETE.
You can execute a simple update with Handle’s int execute(String sql, Object… args) method, which binds simple positional parameters.
count = handle.execute("INSERT INTO \"user\" (id, \"name\") VALUES(?, ?)", 4, "Alice");
assertThat(count).isOne();
To further customize, use createUpdate:
int count = handle.createUpdate("INSERT INTO \"user\" (id, \"name\") VALUES(:id, :name)")
.bind("id", 3)
.bind("name", "Charlie")
.execute();
assertThat(count).isOne();
Updates may return Generated Keys instead of a result count.
4.11. Batches
A Batch sends many commands to the server in bulk.
After opening the batch, add statements repeatedly, then invoke execute().
Batch batch = handle.createBatch();
batch.add("INSERT INTO fruit VALUES(0, 'apple')");
batch.add("INSERT INTO fruit VALUES(1, 'banana')");
int[] rowsModified = batch.execute();
The statements are sent to the database in bulk, but each statement is executed separately.
There are no parameters. Each statement returns a modification count, as with an Update, and those counts are then returned in an int[] array. In common cases all elements will be 1.
Some database drivers might return special values in conditions where modification counts are not available. See the executeBatch documentation for details.
4.12. Prepared Batches
A PreparedBatch sends one statement to the server with many argument sets. The statement is executed repeatedly, once for each batch of arguments that is add-ed to it.
The result is again an int[] of modified row counts.
PreparedBatch batch = handle.prepareBatch("INSERT INTO \"user\" (id, \"name\") VALUES(:id, :name)");
for (int i = 100; i < 5000; i++) {
batch.bind("id", i).bind("name", "User:" + i).add();
}
int[] counts = batch.execute();
SqlObject also supports batch inserts:
public void testSqlObjectBatch() {
BasketOfFruit basket = handle.attach(BasketOfFruit.class);
int[] rowsModified = basket.fillBasket(Arrays.asList(
new Fruit(0, "apple"),
new Fruit(1, "banana")));
assertThat(rowsModified).containsExactly(1, 1);
assertThat(basket.countFruit()).isEqualTo(2);
}
public interface BasketOfFruit {
@SqlBatch("INSERT INTO fruit VALUES(:id, :name)")
int[] fillBasket(@BindBean Collection<Fruit> fruits);
@SqlQuery("SELECT count(1) FROM fruit")
int countFruit();
}
Batching dramatically increases efficiency over repeated single statement execution, but many databases don’t handle extremely large batches well either. Test with your database configuration, but often extremely large data sets should be divided and committed in pieces - or risk bringing your database to its knees. |
4.12.1. Exception Rewriting
The JDBC SQLException class is very old and predates more modern exception facilities like Throwable’s suppressed exceptions. When a batch fails, there may be multiple failures to report, which could not be represented by the base Exception types of the day.
So SQLException has a bespoke getNextException chain to represent the causes of a batch failure. Unfortunately, by default most logging libraries do not print these exceptions out, pushing their handling into your code. It is very common to forget to handle this situation and end up with logs that say nothing other than
java.sql.BatchUpdateException: Batch entry 1 insert into something (id, name) values (0, '') was aborted. Call getNextException to see the cause.
jdbi will attempt to rewrite such nextExceptions into "suppressed exceptions" (new in Java 8) so that your logs are more helpful:
java.sql.BatchUpdateException: Batch entry 1 insert into something (id, name) values (0, 'Keith') was aborted. Call getNextException to see the cause.
Suppressed: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "something_pkey"
Detail: Key (id)=(0) already exists.
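The rewrite relies only on standard JDK facilities. A minimal sketch of what turning a getNextException chain into suppressed exceptions amounts to (the exception messages are made up for illustration, and Jdbi's internal implementation may differ):

```java
import java.sql.BatchUpdateException;
import java.sql.SQLException;

public class SuppressedDemo {
    // Copies the getNextException chain onto the suppressed-exception list,
    // so that standard logging frameworks print the real causes.
    static BatchUpdateException rewrite(BatchUpdateException batch) {
        for (SQLException e = batch.getNextException(); e != null; e = e.getNextException()) {
            batch.addSuppressed(e);
        }
        return batch;
    }

    public static void main(String[] args) {
        BatchUpdateException batch = new BatchUpdateException("Batch entry 1 was aborted", new int[0]);
        batch.setNextException(new SQLException("duplicate key value violates unique constraint"));
        System.out.println(rewrite(batch).getSuppressed().length); // prints "1"
    }
}
```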
4.13. Generated Keys
An Update or PreparedBatch may automatically generate keys. These keys are treated separately from normal results. Depending on your database and configuration, the entire inserted row may be available.
Unfortunately there is a lot of variation between databases supporting this feature so please test this feature’s interaction with your database thoroughly. |
In PostgreSQL, the entire row is available, so you can immediately map your inserted names back to full User objects! This avoids the overhead of separately querying after the insert completes.
Consider the following table:
public static class User {
final int id;
final String name;
public User(int id, String name) {
this.id = id;
this.name = name;
}
}
@BeforeEach
public void setUp() {
db = pgExtension.getJdbi();
db.useHandle(h -> h.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name VARCHAR)"));
db.registerRowMapper(ConstructorMapper.factory(User.class));
}
You can get generated keys in the fluent style:
public void fluentInsertKeys() {
db.useHandle(handle -> {
User data = handle.createUpdate("INSERT INTO users (name) VALUES(?)")
.bind(0, "Data")
.executeAndReturnGeneratedKeys()
.mapTo(User.class)
.one();
assertEquals(1, data.id); // This value is generated by the database
assertEquals("Data", data.name);
});
}
4.14. Stored Procedure Calls
A Call invokes a database stored procedure.
Let’s assume an existing stored procedure as an example:
CREATE FUNCTION add(a IN INT, b IN INT, sum OUT INT) AS $$
BEGIN
sum := a + b;
END;
$$ LANGUAGE plpgsql
Here’s how to call a stored procedure:
OutParameters result = handle
.createCall("{:sum = call add(:a, :b)}") (1)
.bind("a", 13) (2)
.bind("b", 9) (2)
.registerOutParameter("sum", Types.INTEGER) (3) (4)
.invoke(); (5)
1 | Call Handle.createCall() with the SQL statement. Note that JDBC has a peculiar SQL format when calling stored procedures, which we must follow. |
2 | Bind input parameters to the procedure call. |
3 | Register out parameters, the values that will be returned from the stored procedure call. This tells JDBC what data type to expect for each out parameter. |
4 | Out parameters may be registered by name (as shown in the example) or by zero-based index, if the SQL is using positional parameters. Multiple output parameters may be registered, depending on the output of the stored procedure itself. |
5 | Finally, call invoke() to execute the procedure. |
Invoking the stored procedure returns an OutParameters object, which contains the value(s) returned from the stored procedure call.
Now we can extract the result(s) from OutParameters:
int sum = result.getInt("sum");
It is possible to return open cursors as a result-like object by declaring it as Types.REF_CURSOR and then inspecting it via OutParameters.getRowSet(). Usually this must be done in a transaction, and the results must be consumed before closing the statement by processing it using the Call.invoke(Consumer) or Call.invoke(Function) callback style.
Due to design constraints within JDBC, the parameter data types available through OutParameters are limited to those types supported directly by JDBC. This cannot be expanded through e.g. mapper registration. |
4.15. Scripts
A Script parses a String into semicolon terminated statements. The statements can be executed in a single Batch or individually.
int[] results = handle.createScript(
"INSERT INTO \"user\" VALUES(3, 'Charlie');"
+ "UPDATE \"user\" SET \"name\"='Bobby Tables' WHERE id=2;")
.execute();
assertThat(results).containsExactly(1, 1);
4.16. Transactions
Jdbi provides full support for JDBC transactions.
Handle objects provide two ways to open a transaction — inTransaction allows you to return a result, and useTransaction has no return value.
Both optionally allow you to specify the transaction isolation level.
public Optional<User> findUserById(long id) {
return handle.inTransaction(h ->
h.createQuery("SELECT * FROM users WHERE id=:id")
.bind("id", id)
.mapTo(User.class)
.findFirst());
}
Here, we (probably unnecessarily) guard a simple SELECT statement with a transaction.
Additionally, Handle has a number of methods for direct transaction management: begin(), savepoint(), rollback(), commit(), etc. Normally, you will not need to use these. If you do not explicitly commit a manually opened transaction, it will be rolled back.
4.16.1. Serializable Transactions
For more advanced queries, sometimes serializable transactions are required. jdbi includes a transaction runner that is able to retry transactions that abort due to serialization failures. It is important that your transaction does not have side effects as it may be executed multiple times.
// Automatically rerun transactions
db.setTransactionHandler(new SerializableTransactionRunner());
// Set up some values
BiConsumer<Handle, Integer> insert = (h, i) -> h.execute("INSERT INTO ints(value) VALUES(?)", i);
handle.execute("CREATE TABLE ints (value INTEGER)");
insert.accept(handle, 10);
insert.accept(handle, 20);
// Run the following twice in parallel, and synchronize
ExecutorService executor = Executors.newCachedThreadPool();
CountDownLatch latch = new CountDownLatch(2);
Callable<Integer> sumAndInsert = () ->
db.inTransaction(TransactionIsolationLevel.SERIALIZABLE, h -> {
// Both read initial state of table
int sum = h.select("SELECT sum(value) FROM ints").mapTo(int.class).one();
// First time through, make sure neither transaction writes until both have read
latch.countDown();
latch.await();
// Now do the write.
insert.accept(h, sum);
return sum;
});
// Both of these would calculate 10 + 20 = 30, but that violates serialization!
Future<Integer> result1 = executor.submit(sumAndInsert);
Future<Integer> result2 = executor.submit(sumAndInsert);
// One of the transactions gets 30, the other will abort and automatically rerun.
// On the second attempt it will compute 10 + 20 + 30 = 60, seeing the update from its sibling.
// This assertion fails under any isolation level below SERIALIZABLE!
assertThat(result1.get() + result2.get()).isEqualTo(30 + 60);
executor.shutdown();
The above test is designed to run two transactions in lock step. Each attempts to read the sum of all rows in the table, and then insert a new row with that sum. We seed the table with the values 10 and 20.
Without serializable isolation, each transaction reads 10 and 20, and then returns 30. The end result is 30 + 30 = 60, which does not correspond to any serial execution of the transactions!
With serializable isolation, one of the two transactions is forced to abort and retry. On the second go around, it calculates 10 + 20 + 30 = 60. Adding to 30 from the other, we get 30 + 60 = 90 and the assertion succeeds.
4.17. Metadata
Jdbi allows access to the database metadata through queryMetadata methods on the Handle.
Simple values can be queried directly using a method reference:
String url = h.queryMetadata(DatabaseMetaData::getURL);
boolean supportsTransactions = h.queryMetadata(DatabaseMetaData::supportsTransactions);
Many methods on the DatabaseMetaData return a ResultSet. These can be used with the queryMetadata method that returns a ResultBearing.
All JDBI Row Mappers and Column Mappers are available to map these results:
List<String> catalogNames = h.queryMetadata(DatabaseMetaData::getCatalogs)
.mapTo(String.class)
.list();
5. Configuration
Jdbi aims to be useful out of the box with minimal configuration. Sometimes you need to change default behavior, or add in extensions to handle additional database types. Each piece of core or extension that wishes to participate in configuration defines a configuration class; for example, the SqlStatements class stores SqlStatement related configuration. Then, on any Configurable context (like a Jdbi or Handle) you can change configuration in a type safe way:
jdbi.getConfig(SqlStatements.class).setUnusedBindingAllowed(true);
jdbi.getConfig(Arguments.class).register(new MyTypeArgumentFactory());
jdbi.getConfig(Handles.class).setForceEndTransactions(true);
// Or, if you have a bunch of work to do:
jdbi.configure(RowMappers.class, rm -> {
rm.register(new TypeARowMapperFactory());
rm.register(new TypeBRowMapperFactory());
});
Generally, you should finalize all configuration changes before interacting with the database.
When a new context is created, it inherits a copy of the parent context configuration at the time of creation. So a Handle initializes its configuration from the creating Jdbi, but changes never propagate back up.
See JdbiConfig for more advanced implementation details.
5.1. Qualified Types
Sometimes the same Java object can correspond to multiple data types in a database. For example, a String could be varchar plaintext, nvarchar text, json data, etc., all with different handling requirements.
QualifiedType allows you to add such context to a Java type:
QualifiedType.of(String.class).with(Json.class);
This QualifiedType still represents the String type, but qualified with the @Json annotation. It can be used in a way similar to GenericType, to make components handling values (mainly ArgumentFactories and ColumnMapperFactories) perform their work differently, and to have the values handled by different implementations altogether:
@Json
public class JsonArgumentFactory extends AbstractArgumentFactory<String> {
@Override
protected Argument build(String value, ConfigRegistry config) {
// do something specifically for json data
}
}
Once registered, this @Json qualified factory will receive only @Json String values. Other factories not qualified as such will not receive this value:
QualifiedType<String> json = QualifiedType.of(String.class).with(Json.class);
query.bindByType("jsonValue", "{\"foo\":1}", json);
Jdbi chooses factories to handle values by exactly matching their qualifiers. It’s up to the factory implementations to discriminate on the type of the value afterwards. |
Qualifiers are implemented as Annotations. This allows factories to independently inspect values for qualifiers at the source, such as on their Class, to alter their own behavior or to requalify a value and have it re-evaluated by Jdbi’s lookup chain. |
Qualifiers being annotations does not mean they inherently activate their function when placed in source classes. Each feature decides its own rules regarding their use. |
Arguments can only be qualified for binding via bindByType calls, not regular bind or update.execute(Object…). Also, arrays cannot be qualified. |
These features currently make use of qualified types:
- @NVarchar and @MacAddr (the latter in jdbi3-postgres) bind and map Strings as nvarchar and macaddr respectively, instead of the usual varchar.
- jdbi3-postgres offers HStore.
- BeanMapper, @BindBean, @RegisterBeanMapper, mapToBean(), and bindBean() respect qualifiers on getters, setters, and setter parameters.
- ConstructorMapper and @RegisterConstructorMapper respect qualifiers on constructor parameters.
- @BindMethods and bindMethods() respect qualifiers on methods.
- @BindFields, @RegisterFieldMapper, FieldMapper and bindFields() respect qualifiers on fields.
- SqlObject respects qualifiers on methods (applies them to the return type) and parameters.
  - On parameters of type Consumer<T>, qualifiers are applied to the T.
- @MapTo
- @BindJpa and JpaMapper respect qualifiers on getters and setters.
- @BindKotlin, bindKotlin(), and KotlinMapper respect qualifiers on constructor parameters, getters, setters, and setter parameters.
6. SQL Objects
SQL Objects are a declarative-style alternative to the fluent-style Core API.
To start using the SQL Object plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-sqlobject</artifactId>
</dependency>
Then install the plugin into your Jdbi instance:
Jdbi jdbi = ...
jdbi.installPlugin(new SqlObjectPlugin());
With SQL Object, you declare a public interface, add methods for each database operation, and specify what SQL statement to execute.
You can specify what each method does in one of two ways:
- Annotate the method with a SQL method annotation. Jdbi provides four of these annotations out of the box (updates, queries, stored procedure calls, and batches).
- Declare the method as a Java 8 default method, and provide your own implementation in the method body.
At runtime, you can request an instance of your interface, and Jdbi synthesizes an implementation based on the annotations and methods you declared.
6.1. Annotated Methods
Methods annotated with one of Jdbi’s SQL method annotations (@SqlBatch, @SqlCall, @SqlQuery, or @SqlUpdate) have their implementation generated automatically, based on the annotations on the method, and its parameters.
The parameters to the method are used as arguments to the statement, and the SQL statement result mapped into the method return type.
6.1.1. @SqlUpdate
Use the @SqlUpdate annotation for operations that modify data (i.e. inserts, updates, deletes).
public interface UserDao {
@SqlUpdate("insert into users (id, name) values (?, ?)")
void insert(long id, String name);
}
Method arguments are bound to the ? tokens in the SQL statement at their respective positions. So id is bound to the first ?, and name to the second.
@SqlUpdate can also be used for DDL (Data Definition Language) operations like creating or altering tables. We recommend using a schema migration tool such as Flyway or Liquibase to maintain your database schemas.
By default, a @SqlUpdate method may return a handful of types:
- void: returns nothing (obviously)
- int or long: returns the update count. Depending on the database vendor and JDBC driver, this may be either the number of rows changed, or the number matched by the query (regardless of whether any data was changed).
- boolean: returns true if the update count is greater than zero.
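As a sketch of the non-void variants, a hypothetical DAO (the method names are assumptions, not from the original example) might declare:

```java
import org.jdbi.v3.sqlobject.statement.SqlUpdate;

// Hypothetical DAO illustrating the int and boolean return conventions,
// assuming the same users(id, name) table as above.
public interface UserAdminDao {
    @SqlUpdate("delete from users where id = ?")
    int deleteById(long id); // update count: number of rows deleted

    @SqlUpdate("update users set name = ? where id = ?")
    boolean rename(String name, long id); // true if at least one row was updated
}
```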
@GetGeneratedKeys
Some SQL statements will cause data to be generated on your behalf at the database, e.g. a table with an auto-generated primary key, or a primary key selected from a sequence. We need a way to retrieve these generated values back from the database.
Databases vary in support for generated keys. Some support only one generated key column per statement, and some (such as Postgres) can return the entire row. You should check your database vendor’s documentation before relying on this behavior. |
The @GetGeneratedKeys annotation tells Jdbi that the return value should be the generated key from the SQL statement, instead of the update count.
public interface UserDao {
@SqlUpdate("insert into users (id, name) values (nextval('user_seq'), ?)")
@GetGeneratedKeys("id")
long insert(String name);
}
Multiple columns may be generated and returned in this way:
public interface UserDao {
@SqlUpdate("insert into users (id, name, created_on) values (nextval('user_seq'), ?, now())")
@GetGeneratedKeys({"id", "created_on"})
@RegisterBeanMapper(IdCreateTime.class)
IdCreateTime insert(String name);
}
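The IdCreateTime bean referenced above is not shown in this section; a minimal, hypothetical version compatible with @RegisterBeanMapper might look like this (property names chosen to match the id and created_on column labels):

```java
import java.time.OffsetDateTime;

// Hypothetical bean for the generated-keys example. BeanMapper matches the
// "id" column to the id property and "created_on" to createdOn.
public class IdCreateTime {
    private long id;
    private OffsetDateTime createdOn;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public OffsetDateTime getCreatedOn() { return createdOn; }
    public void setCreatedOn(OffsetDateTime createdOn) { this.createdOn = createdOn; }
}
```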
The One True Database supports additional functionality when returning generated keys. See PostgreSQL for more details. |
6.1.2. Binding Arguments
Before we go on with the other @Sql___ annotations, let’s discuss how method arguments are bound as parameters to the SQL statement.
By default, arguments passed to the method are bound as positional parameters in the SQL statement.
public interface UserDao {
@SqlUpdate("insert into users (id, name) values (?, ?)")
void insert(long id, String name);
}
You can use named arguments with the @Bind
annotation:
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(@Bind("id") long id, @Bind("name") String name);
Compiling with parameter names (e.g. with the javac -parameters flag) removes the need for the @Bind annotation. Jdbi will then bind each un-annotated parameter to the parameter’s name.
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(long id, String name);
Binding a list of values is done through the @BindList annotation. This expands the list in 'a,b,c,d,…' form. Note that this annotation requires you to use the <binding> notation, unlike @Bind (which uses :binding):
@SqlQuery("select name from users where id in (<userIds>)")
List<String> getFromIds(@BindList("userIds") List<Long> userIds);
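The expansion can be pictured with a plain-Java sketch; this mimics the observable rendering (one placeholder per element), not Jdbi’s actual implementation:

```java
import java.util.List;
import java.util.stream.Collectors;

public class BindListSketch {
    // Replaces the <name> token with one ? placeholder per list element,
    // which is roughly what the rendered statement looks like.
    static String expand(String sql, String name, List<?> values) {
        String placeholders = values.stream()
                .map(v -> "?")
                .collect(Collectors.joining(","));
        return sql.replace("<" + name + ">", placeholders);
    }

    public static void main(String[] args) {
        System.out.println(expand(
                "select name from users where id in (<userIds>)",
                "userIds", List.of(1L, 2L, 3L)));
        // select name from users where id in (?,?,?)
    }
}
```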
You can bind from the entries of a Map:
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(@BindMap Map<String, ?> map);
In SQL Object (but not in Core), you can qualify a bound map with a prefix:
@SqlUpdate("insert into users (id, name) values (:user.id, :user.name)")
void insert(@BindMap("user") Map<String, ?> map);
You can bind from the properties of a Java Bean:
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(@BindBean User user);
You can bind from the public fields of an object:
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(@BindFields User user);
Or you can bind from public, parameterless methods of an object:
@SqlUpdate("insert into users (id, name) values (:functionThatReturnsTheId, :functionThatReturnsTheName)")
void insert(@BindMethods User user);
Like @BindMap, the @BindBean, @BindMethods, and @BindFields annotations can have an optional prefix:
@SqlUpdate("insert into users (id, name) values (:user.id, :user.name)")
void insert(@BindBean("user") User user);
//void insert(@BindFields("user") User user);
//void insert(@BindMethods("user") User user);
As in the Core API, @BindBean, @BindFields, and @BindMethods may be used to bind nested properties, e.g. :user.address.street.
@BindMap does not bind nested properties; map keys are expected to exactly match the bound parameter name.
6.1.3. @SqlQuery
Use the @SqlQuery
annotation for select operations.
Query methods may return a single- or multi-row result, depending on whether the method return type looks like a collection.
public interface UserDao {
@SqlQuery("select name from users")
List<String> listNames(); (1)
@SqlQuery("select name from users where id = ?")
String getName(long id); (2) (3)
@SqlQuery("select name from users where id = ?")
Optional<String> findName(long id); (4)
}
1 | When a multi-row method returns an empty result set, an empty collection is returned. |
2 | If a single-row method returns multiple rows from the query, only the first row in the result set is returned from the method. |
3 | If a single-row method returns an empty result set, null is returned. |
4 | Methods may return Optional values. If the query returns no rows (or if the value in the row is null), Optional.empty() is returned instead of null. SQL Object throws an exception if the query returns more than one row. |
Jdbi can be "taught" to recognize new collection types by registering a CollectorFactory with the [JdbiCollectors] config registry. |
See BuiltInCollectorFactory for the complete list of collection types supported out of the box. Certain Jdbi plugins (e.g. GuavaPlugin) register additional collection types.
Query methods may also return a ResultIterable, ResultIterator, or a Stream.
public interface UserDao {
@SqlQuery("select name from users")
ResultIterable<String> getNamesAsIterable();
@SqlQuery("select name from users")
ResultIterator<String> getNamesAsIterator();
@SqlQuery("select name from users")
Stream<String> getNamesAsStream();
}
The objects returned from these methods hold database resources that must be explicitly closed when you are done with them. We strongly recommend the use of try-with-resource blocks when calling these methods, to prevent resource leaks:
try (ResultIterable<String> names = dao.getNamesAsIterable()) {
...
}
try (ResultIterator<String> names = dao.getNamesAsIterator()) {
...
}
try (Stream<String> names = dao.getNamesAsStream()) {
...
}
ResultIterable, ResultIterator, and Stream methods do not play nice with on-demand SQL Objects. Unless the methods are called in a nested way (see On-Demand), the returned object will already be closed.
@RegisterRowMapper
Use @RegisterRowMapper
to register a concrete row mapper class:
public interface UserDao {
@SqlQuery("select * from users")
@RegisterRowMapper(UserMapper.class)
List<User> list();
}
Row mappers used with this annotation must meet a few requirements:
public class UserMapper implements RowMapper<User> { (1) (2)
public UserMapper() { (3)
...
}
public User map(ResultSet rs, StatementContext ctx) throws SQLException {
...
}
}
1 | Must be a public class. |
2 | Must implement RowMapper with an explicit type argument (e.g.
RowMapper<User> ) instead of a type variable (e.g. RowMapper<T> ). |
3 | Must have a public, no-argument constructor (or a default constructor). |
The @RegisterRowMapper annotation may be repeated multiple times on the same type or method to register multiple mappers.
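For reference, a filled-in version of the skeleton above could look like the following sketch; the User type and the id/name column labels are assumptions for illustration, not part of the original example:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import org.jdbi.v3.core.mapper.RowMapper;
import org.jdbi.v3.core.statement.StatementContext;

// Hypothetical mapper: reads the id and name columns into a User.
public class UserMapper implements RowMapper<User> {
    @Override
    public User map(ResultSet rs, StatementContext ctx) throws SQLException {
        return new User(rs.getInt("id"), rs.getString("name"));
    }
}
```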
@RegisterRowMapperFactory
Use @RegisterRowMapperFactory
to register a RowMapperFactory
.
public interface UserDao {
@SqlQuery("select * from users")
@RegisterRowMapperFactory(UserMapperFactory.class)
List<User> list();
}
Row mapper factories used with this annotation must meet a few requirements:
public class UserMapperFactory implements RowMapperFactory { (1)
public UserMapperFactory() { (2)
...
}
public Optional<RowMapper<?>> build(Type type, ConfigRegistry config) {
...
}
}
1 | Must be a public class. |
2 | Must have a public, no-argument constructor (or a default constructor). |
The @RegisterRowMapperFactory annotation may be repeated multiple times on the same type or method to register multiple factories.
@RegisterColumnMapper
Use @RegisterColumnMapper
to register a column mapper:
public interface AccountDao {
@SqlQuery("select balance from accounts where id = ?")
@RegisterColumnMapper(MoneyMapper.class)
Money getBalance(long id);
}
Column mappers used with this annotation must meet a few requirements:
public class MoneyMapper implements ColumnMapper<Money> { (1) (2)
public MoneyMapper() { (3)
...
}
public Money map(ResultSet r, int columnNumber, StatementContext ctx) throws SQLException {
...
}
}
1 | Must be a public class. |
2 | Must implement ColumnMapper with an explicit type argument (e.g. ColumnMapper<Money>) instead of a type variable (e.g. ColumnMapper<T>). |
3 | Must have a public, no-argument constructor (or a default constructor). |
The @RegisterColumnMapper annotation may be repeated multiple times on the same type or method to register multiple mappers.
@RegisterColumnMapperFactory
Use @RegisterColumnMapperFactory
to register a column mapper factory:
public interface AccountDao {
@SqlQuery("select * from users")
@RegisterColumnMapperFactory(MoneyMapperFactory.class)
List<User> list();
}
Column mapper factories used with this annotation must meet a few requirements:
public class MoneyMapperFactory implements ColumnMapperFactory { (1)
public MoneyMapperFactory() { (2)
...
}
public Optional<ColumnMapper<?>> build(Type type, ConfigRegistry config) {
...
}
}
1 | Must be a public class. |
2 | Must have a public, no-argument constructor (or a default constructor). |
The @RegisterColumnMapperFactory annotation may be repeated multiple times on the same type or method to register multiple factories.
@RegisterBeanMapper
Use @RegisterBeanMapper
to register a BeanMapper for a bean class:
public interface UserDao {
@SqlQuery("select * from users")
@RegisterBeanMapper(User.class)
List<User> list();
}
Using the prefix attribute causes the bean mapper to map only those columns that begin with the prefix:
public interface UserDao {
@SqlQuery("select u.id u_id, u.name u_name, r.id r_id, r.name r_name " +
"from users u left join roles r on u.role_id = r.id")
@RegisterBeanMapper(value = User.class, prefix = "u")
@RegisterBeanMapper(value = Role.class, prefix = "r")
Map<User,Role> getRolesPerUser();
}
In this example, the User mapper will map the columns u_id and u_name into the User.id and User.name properties. Likewise for r_id and r_name into Role.id and Role.name, respectively.
The @RegisterBeanMapper annotation may be repeated (as demonstrated above) on the same type or method to register multiple bean mappers.
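The prefix mechanics can be pictured with a plain-Java sketch. Jdbi’s real matching is more lenient (case-insensitive and tolerant of underscores); this sketch assumes a literal prefix plus _ separator, purely for illustration:

```java
public class PrefixSketch {
    // A prefixed mapper only claims columns starting with its prefix,
    // then strips the prefix before matching property names.
    static String claim(String column, String prefix) {
        String p = prefix + "_";
        return column.startsWith(p) ? column.substring(p.length()) : null;
    }

    public static void main(String[] args) {
        System.out.println(claim("u_name", "u")); // name
        System.out.println(claim("r_name", "u")); // null: not claimed by the "u" mapper
    }
}
```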
@RegisterConstructorMapper
Use @RegisterConstructorMapper
to register a ConstructorMapper for classes
that are instantiated with all properties through the constructor.
public interface UserDao {
@SqlQuery("select * from users")
@RegisterConstructorMapper(User.class)
List<User> list();
}
Using the prefix attribute causes the constructor mapper to only map those columns that begin with the prefix:
public interface UserDao {
@SqlQuery("select u.id u_id, u.name u_name, r.id r_id, r.name r_name " +
"from users u left join roles r on u.role_id = r.id")
@RegisterConstructorMapper(value = User.class, prefix = "u")
@RegisterConstructorMapper(value = Role.class, prefix = "r")
Map<User,Role> getRolesPerUser();
}
In this example, the User mapper will map the columns u_id and u_name into the id and name parameters of the User constructor. Likewise for r_id and r_name into the id and name parameters of the Role constructor, respectively.
The @RegisterConstructorMapper annotation may be repeated multiple times on the same type or method to register multiple constructor mappers.
@RegisterFieldMapper
Use @RegisterFieldMapper
to register a FieldMapper for a given class.
public interface UserDao {
@SqlQuery("select * from users")
@RegisterFieldMapper(User.class)
List<User> list();
}
Using the prefix attribute causes the field mapper to only map those columns that begin with the prefix:
public interface UserDao {
@SqlQuery("select u.id u_id, u.name u_name, r.id r_id, r.name r_name " +
"from users u left join roles r on u.role_id = r.id")
@RegisterFieldMapper(value = User.class, prefix = "u")
@RegisterFieldMapper(value = Role.class, prefix = "r")
Map<User,Role> getRolesPerUser();
}
In this example, the User mapper will map the columns u_id and u_name into the User.id and User.name fields. Likewise for r_id and r_name into the Role.id and Role.name fields, respectively.
The @RegisterFieldMapper annotation may be repeated multiple times on the same type or method to register multiple field mappers.
@SingleValue
Sometimes when using advanced SQL features like arrays, a container type like int[] or List<Integer> can ambiguously mean either "a single SQL int[]" or "a ResultSet of int".
Since arrays are not commonly used in normalized schemas, SQL Object assumes by default that you are collecting a ResultSet into a container object. You can annotate a return type with @SingleValue to override this.
For example, suppose we want to select a varchar[] column from a single row:
public interface UserDao {
@SqlQuery("select roles from users where id = ?")
@SingleValue
List<String> getUserRoles(long userId);
}
Normally, Jdbi would interpret List<String> to mean that the mapped type is String, and to collect all result rows into a list. The @SingleValue annotation causes Jdbi to treat List<String> as the mapped type instead.
It’s tempting to use @SingleValue Optional<String>, but usually this isn’t needed. Optional is implemented as a container of zero-or-one elements. Adding @SingleValue implies that the database itself has a column of a type like optional<varchar>.
Map<K,V> Results
SQL Object methods may return Map<K,V> types (see Map.Entry mapping in the Core API). In this scenario, each row is mapped to a Map.Entry<K,V>, and the entries for each row are collected into a single Map instance.
A mapper must be registered for both the key and value types. |
You can gather master/detail join rows into a map simply by registering mappers for the key and value types:
@SqlQuery("select u.id u_id, u.name u_name, p.id p_id, p.phone p_phone "
+ "from \"user\" u left join phone p on u.id = p.user_id")
@RegisterConstructorMapper(value = User.class, prefix = "u")
@RegisterConstructorMapper(value = Phone.class, prefix = "p")
Map<User, Phone> getMap();
In the preceding example, the User mapper uses a "u" column name prefix, and the Phone mapper uses "p". Since each mapper only reads the columns with the expected prefix, the respective id columns are unambiguous.
A unique index (e.g. by ID column) can be obtained by setting the key column name:
@SqlQuery("select * from \"user\"")
@KeyColumn("id")
@RegisterConstructorMapper(User.class)
Map<Integer, User> getAll();
Set both the key and value column names to gather a two-column query into a map result:
@SqlQuery("select \"key\", \"value\" from config")
@KeyColumn("key")
@ValueColumn("value")
Map<String, String> getAll();
All of the above examples assume a one-to-one key/value relationship. What if there is a one-to-many relationship? Google Guava provides a Multimap type, which supports mapping multiple values per key.
First, follow the instructions in the Google Guava section to install GuavaPlugin.
Then, simply specify a Multimap return type instead of Map:
@SqlQuery("select u.id u_id, u.name u_name, p.id p_id, p.phone p_phone "
+ "from \"user\" u left join phone p on u.id = p.user_id")
@RegisterConstructorMapper(value = User.class, prefix = "u")
@RegisterConstructorMapper(value = Phone.class, prefix = "p")
Multimap<User, Phone> getMultimap();
All the examples so far have been Map types where each row in the result set is a single Map.Entry. However, what if the Map we want to return is actually a single row or even a single column?
Jdbi’s MapMapper maps each row to a Map<String, Object>, where column names are mapped to column values.
Jdbi’s default setting is to convert column names to lowercase for Map keys. This behavior can be changed via the MapMappers config class.
By default, SQL Object treats Map return types as a collection of Map.Entry values. Use the @SingleValue annotation to override this, so that the return type is treated as a single value instead of a collection:
@SqlQuery("select * from users where id = ?")
@RegisterRowMapper(MapMapper.class)
@SingleValue
Map<String, Object> getById(long userId);
Starting with Jdbi 3.6.0, there’s the GenericMapMapperFactory, which provides the same feature but allows value types other than Object, as long as a suitable ColumnMapper is registered and all columns are of this type:
@SqlQuery("select 1.0 as LOW, 2.0 as MEDIUM, 3.0 as HIGH")
@RegisterRowMapperFactory(GenericMapMapperFactory.class)
@SingleValue
Map<String, BigDecimal> getNumericLevels();
Do you use PostgreSQL’s hstore columns? The PostgreSQL plugin provides an hstore to Map<String, String> column mapper. See hstore for more information.
@UseRowReducer
@SqlQuery methods that use join queries may reduce master-detail joins into one or more master-level objects. See ResultBearing.reduceRows() for an introduction to row reducers.
Consider a filesystem metaphor with folders and documents. In the join, we’ll prefix folder columns with f_ and document columns with d_.
@RegisterBeanMapper(value = Folder.class, prefix = "f") (1)
@RegisterBeanMapper(value = Document.class, prefix = "d")
public interface DocumentDao {
@SqlQuery("select " +
"f.id f_id, f.name f_name, " +
"d.id d_id, d.name d_name, d.contents d_contents " +
"from folders f left join documents d " +
"on f.id = d.folder_id " +
"where f.id = :folderId " +
"order by d.name")
@UseRowReducer(FolderDocReducer.class) (2)
Optional<Folder> getFolder(int folderId); (3)
@SqlQuery("select " +
"f.id f_id, f.name f_name, " +
"d.id d_id, d.name d_name, d.contents d_contents " +
"from folders f left join documents d " +
"on f.id = d.folder_id " +
"order by f.name, d.name")
@UseRowReducer(FolderDocReducer.class) (2)
List<Folder> listFolders(); (3)
class FolderDocReducer implements LinkedHashMapRowReducer<Integer, Folder> { (4)
@Override
public void accumulate(Map<Integer, Folder> map, RowView rowView) {
Folder f = map.computeIfAbsent(rowView.getColumn("f_id", Integer.class), (5)
id -> rowView.getRow(Folder.class));
if (rowView.getColumn("d_id", Integer.class) != null) { (6)
f.getDocuments().add(rowView.getRow(Document.class));
}
}
}
}
1 | In this example, we register the folder and document mappers with a
prefix, so that each mapper only looks at the columns with that prefix.
These mappers are used indirectly by the row reducer in the
getRow(Folder.class) and getRow(Document.class) calls. |
2 | Annotate the method with @UseRowReducer , and specify the RowReducer
implementation class. |
3 | The same RowReducer implementation may be used for both single- and
multi-master-record queries. |
4 | LinkedHashMapRowReducer
is an abstract RowReducer implementation that uses a LinkedHashMap as the result
container, and returns the values() collection as the result. |
5 | Get the Folder for this row from the map by ID, or create it if not in the map. |
6 | Confirm this row has a document (this is a left join) before mapping a document and adding it to the folder. |
6.1.4. @SqlBatch
Use the @SqlBatch annotation for bulk update operations. @SqlBatch is analogous to PreparedBatch in Core.
public interface ContactDao {
@SqlBatch("insert into contacts (id, name, email) values (?, ?, ?)")
void bulkInsert(List<Integer> ids,
Iterator<String> names,
String... emails);
}
Batch parameters may be collections, iterables, iterators, arrays (including varargs). We’ll call these "iterables" for brevity.
When a batch method is called, SQL Object iterates through the method’s iterable parameters, and executes the SQL statement with the corresponding elements from each parameter.
Thus a statement like:
contactDao.bulkInsert(
ImmutableList.of(1, 2, 3),
ImmutableList.of("foo", "bar", "baz").iterator(),
"a@example.com", "b@example.com", "c@fake.com");
would execute:
insert into contacts (id, name, email) values (1, 'foo', 'a@example.com');
insert into contacts (id, name, email) values (2, 'bar', 'b@example.com');
insert into contacts (id, name, email) values (3, 'baz', 'c@fake.com');
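The pairing of iterable parameters can be pictured with a plain-Java sketch; this mimics the observable behavior (the i-th element of each iterable feeds the i-th execution), not Jdbi’s actual implementation:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchZipSketch {
    // Builds one rendered statement per row by walking all iterables in step.
    static List<String> zipToInserts(List<Integer> ids, Iterator<String> names, String... emails) {
        List<String> statements = new ArrayList<>();
        int i = 0;
        for (Integer id : ids) {
            statements.add(String.format(
                    "insert into contacts (id, name, email) values (%d, '%s', '%s');",
                    id, names.next(), emails[i++]));
        }
        return statements;
    }

    public static void main(String[] args) {
        zipToInserts(List.of(1, 2, 3),
                List.of("foo", "bar", "baz").iterator(),
                "a@example.com", "b@example.com", "c@fake.com")
                .forEach(System.out::println);
    }
}
```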
Constant values may also be used as parameters to a SQL batch. In this case, the same value is bound to that parameter for every SQL statement in the batch.
public interface UserDao {
@SqlBatch("insert into users (tenant_id, id, name) " +
"values (:tenantId, :user.id, :user.name)")
void bulkInsert(@Bind("tenantId") long tenantId, (1)
@BindBean("user") User... users);
}
1 | Insert each user record using the same tenant_id . |
@SqlBatch methods must have at least one iterable parameter.
By default, a @SqlBatch method may return a handful of types:
- void: returns nothing (obviously)
- int[] or long[]: returns the update count per execution in the batch. Depending on the database vendor and JDBC driver, this may be either the number of rows changed by a statement, or the number matched by the query (regardless of whether any data was changed).
- boolean[]: returns true if the update count is greater than zero, one value for each execution in the batch.
@GetGeneratedKeys
Similar to @SqlUpdate, the @GetGeneratedKeys annotation tells SQL Object that the return value should be the generated keys from each SQL statement, instead of the update count. See @GetGeneratedKeys for a more thorough discussion.
public interface UserDao {
@SqlBatch("insert into users (id, name) values (nextval('user_seq'), ?)")
@GetGeneratedKeys("id")
long[] bulkInsert(List<String> names); (1)
}
1 | Returns the generated ID for each inserted name. |
Multiple columns may be generated and returned in this way:
public interface UserDao {
@SqlBatch("insert into users (id, name, created_on) values (nextval('user_seq'), ?, now())")
@GetGeneratedKeys({"id", "created_on"})
@RegisterBeanMapper(IdCreateTime.class)
List<IdCreateTime> bulkInsert(String... names);
}
@SingleValue
In some cases you may want an iterable parameter to be treated as a constant. Use the @SingleValue annotation on the method parameter. This causes SQL Object to bind the whole iterable as the parameter value for every SQL statement in the batch (typically as a SQL array parameter).
public interface UserDao {
@SqlBatch("insert into users (id, name, roles) values (?, ?, ?)")
void bulkInsert(List<Long> ids,
List<String> names,
@SingleValue List<String> roles);
}
In the above example, each new row would get the same varchar[] value in the roles column.
6.1.5. @SqlCall
Use the @SqlCall
annotation to call stored procedures.
public interface AccountDao {
@SqlCall("{call suspend_account(:id)}")
void suspendAccount(@Bind("id") long accountId);
}
@SqlCall methods can return void, or may return OutParameters if the stored procedure has any output parameters. Each output parameter must be registered with the @OutParameter annotation.
public interface OrderDao {
@SqlCall("{call prepare_order_from_cart(:cartId, :orderId, :orderTotal)}")
@OutParameter(name = "orderId", sqlType = java.sql.Types.BIGINT)
@OutParameter(name = "orderTotal", sqlType = java.sql.Types.DECIMAL)
OutParameters prepareOrderFromCart(@Bind("cartId") long cartId);
}
Individual output parameters can be extracted from the OutParameters returned from the method:
OutParameters outParams = orderDao.prepareOrderFromCart(cartId);
long orderId = outParams.getLong("orderId");
double orderTotal = outParams.getDouble("orderTotal");
By passing a Consumer<OutParameters> or a Function<OutParameters, T>, you may process the result before the statement is closed. This is useful for processing cursor-typed results.
6.1.6. @SqlScript
Use @SqlScript to execute one or more statements in a batch. You can define attributes for the template engine to use.
@SqlScript("CREATE TABLE <name> (pk int primary key)")
void createTable(@Define String name);
@SqlScript("INSERT INTO cool_table VALUES (5), (6), (7)")
@SqlScript("DELETE FROM cool_table WHERE pk > 5")
int[] doSomeUpdates(); // returns [ 3, 2 ]
@UseClasspathSqlLocator // load external SQL!
@SqlScript // use the method name
@SqlScript("secondScript") // or specify it yourself
int[] externalScript();
6.1.7. @GetGeneratedKeys
The @GetGeneratedKeys annotation may be used on a @SqlUpdate or @SqlBatch method to return the keys generated from the SQL statement:
public void sqlObjectBatchKeys() {
db.useExtension(UserDao.class, dao -> {
List<User> users = dao.createUsers("Alice", "Bob", "Charlie");
assertEquals(3, users.size());
assertEquals(1, users.get(0).id);
assertEquals("Alice", users.get(0).name);
assertEquals(2, users.get(1).id);
assertEquals("Bob", users.get(1).name);
assertEquals(3, users.get(2).id);
assertEquals("Charlie", users.get(2).name);
});
}
public interface UserDao {
@SqlBatch("INSERT INTO users (name) VALUES(?)")
@GetGeneratedKeys
List<User> createUsers(String... names);
}
6.1.8. SqlLocator
When SQL statements grow in complexity, it may be cumbersome to provide the statements as Java strings in @Sql___ method annotations.
Jdbi provides annotations that let you configure external locations to load SQL statements.
-
@UseAnnotationSqlLocator (the default behavior; use @Sql___(…) annotation value)
-
@UseClasspathSqlLocator - loads SQL from a file on the classpath, based on the package and name of the SQL Object interface type.
package com.foo;
@UseClasspathSqlLocator
interface BarDao {
// loads classpath resource com/foo/BarDao/query.sql
@SqlQuery
void query();
}
@UseClasspathSqlLocator is implemented using the ClasspathSqlLocator, as described above.
ClasspathSqlLocator loads files unchanged by default. Using the @UseClasspathSqlLocator annotation will strip out comments by default! This may lead to unexpected behavior, e.g. SQL Server uses the # character to denote temporary tables. This can be controlled with the stripComments annotation attribute.
package com.foo;
@UseClasspathSqlLocator(stripComments=false)
interface BarDao {
// loads classpath resource com/foo/BarDao/query.sql without stripping comment lines
@SqlQuery
void query();
}
If you like StringTemplate, the StringTemplate 4 module also provides a SqlLocator, which can load SQL templates from StringTemplate 4 files on the classpath.
6.1.9. @CreateSqlObject
Use the @CreateSqlObject annotation to reuse a SqlObject inside another SqlObject. For example, you can build a transactional method which performs a SQL update defined in another SqlObject as part of the transaction. Jdbi won’t open a new handle for a call to the child SqlObject.
public interface Bar {
@SqlUpdate("insert into bar (name) values (:name)")
@GetGeneratedKeys
int insert(@Bind("name") String name);
}
public interface Foo {
@CreateSqlObject
Bar createBar();
@SqlUpdate("insert into foo (bar_id, name) values (:bar_id, :name)")
void insert(@Bind("bar_id") int barId, @Bind("name") String name);
@Transaction
default void insertBarAndFoo(String barName, String fooName) {
int barId = createBar().insert(barName);
insert(barId, fooName);
}
}
6.1.10. @Timestamped
You can annotate any statement with @Timestamped to bind an OffsetDateTime, whose value is the current time, under the binding name now:
public interface Bar {
@SqlUpdate("insert into times(val) values(:now)")
@Timestamped
int insert();
}
You can customize the binding name:
@Timestamped("timestamp")
TimestampedConfig gives you control over the timezone used for this.
6.2. Consumer Methods
As a special case, you may provide a Consumer<T> argument in addition to other bound parameters. The provided consumer is executed once for each row in the result set. The static type of parameter T determines the row type.
@SqlQuery("select id, name from users")
void forEachUser(Consumer<User> consumer);
6.3. Default Methods
Occasionally a use case comes up where SQL Method annotations don’t fit. In
these situations, you can "drop down" to the Core API using a Java 8 default
method.
Jdbi provides a SqlObject mixin interface with a getHandle method. Make your SQL Object interface extend the SqlObject mixin, then provide your own implementation in a default method:
public interface SplineDao extends SqlObject {
default void reticulateSplines(Spline spline) {
Handle handle = getHandle();
// do tricky stuff using the Core API.
}
}
Default methods can also be used to group multiple SQL operations into a single method call:
public interface ContactPhoneDao {
@SqlUpdate("insert into contacts (id, name) values (nextval('contact_id'), :name)")
long insertContact(@BindBean Contact contact);
@SqlBatch("insert into phones (contact_id, type, phone) values (:contactId, :type, :phone)")
void insertPhone(long contactId, @BindBean Iterable<Phone> phones);
default long insertFullContact(Contact contact) {
long id = insertContact(contact);
insertPhone(id, contact.getPhones());
return id;
}
}
6.4. Transaction Management
You may declare transactions with SqlObject annotations:
@Test
public void sqlObjectTransaction() {
assertThat(handle.attach(UserDao.class).findUserById(3).map(u -> u.name)).contains("Charlie");
}
public interface UserDao {
@SqlQuery("SELECT * FROM users WHERE id=:id")
@Transaction
Optional<User> findUserById(int id);
}
SQL methods with a @Transaction annotation may optionally specify a transaction isolation level:
@SqlUpdate("INSERT INTO USERS (name) VALUES (:name)")
@Transaction(TransactionIsolationLevel.READ_COMMITTED)
void insertUser(String name);
If a @Transaction method calls another @Transaction method, they must specify the same isolation level, or the inner method must not specify anything, in which case the isolation level of the outer method is used.
@Transaction(TransactionIsolationLevel.READ_UNCOMMITTED)
default void outerMethodCallsInnerWithSameLevel() {
// this works: isolation levels agree
innerMethodSameLevel();
}
@Transaction(TransactionIsolationLevel.READ_UNCOMMITTED)
default void innerMethodSameLevel() {}
@Transaction(TransactionIsolationLevel.READ_COMMITTED)
default void outerMethodWithLevelCallsInnerMethodWithNoLevel() {
// this also works: inner method doesn't specify a level, so the outer method controls.
innerMethodWithNoLevel();
}
@Transaction
default void innerMethodWithNoLevel() {}
@Transaction(TransactionIsolationLevel.REPEATABLE_READ)
default void outerMethodWithOneLevelCallsInnerMethodWithAnotherLevel() throws TransactionException {
// error! inner method specifies a different isolation level.
innerMethodWithADifferentLevel();
}
@Transaction(TransactionIsolationLevel.SERIALIZABLE)
default void innerMethodWithADifferentLevel() {}
6.5. Using SQL Objects
Once you’ve defined your interface, there are a few ways to get an instance of it:
6.5.1. Attached to Handle
You can get a SQL Object that is attached to an open handle.
try (Handle handle = jdbi.open()) {
ContactPhoneDao dao = handle.attach(ContactPhoneDao.class);
dao.insertFullContact(contact);
}
Attached SQL Objects have the same lifecycle as the handle—when the handle is closed, the SQL Object becomes unusable.
6.5.2. Temporary SQL Objects
You can also get a temporary SQL Object from the Jdbi object, by passing it a callback (typically a lambda). Use Jdbi.withExtension for operations that return a result, or useExtension for operations with no result.
jdbi.useExtension(ContactPhoneDao.class, dao -> dao.insertFullContact(alice));
long bobId = jdbi.withExtension(ContactPhoneDao.class, dao -> dao.insertFullContact(bob));
Temporary SQL Objects are only valid inside the callback you pass to the method. The SQL Object (and the associated temporary handle) are closed when the callback returns.
6.5.3. On-Demand
On-demand instances have an open-ended lifecycle, as they obtain and release a connection for each method call. They are thread-safe, and may be reused across an application. This is handy when you only need to make single calls at a time.
ContactPhoneDao dao = jdbi.onDemand(ContactPhoneDao.class);
long aliceId = dao.insertFullContact(alice);
long bobId = dao.insertFullContact(bob);
On-demand state is stored in ThreadLocal storage to imitate lexical scoping.
There is a performance penalty every time a connection is allocated and released. In the example above, the two insertFullContact operations take separate Connection objects from your database connection pool. To avoid this, keep the handle open for the duration that you will use your DAO:
dao.useTransaction(txn -> {
User bob = txn.readContact(bobId);
Order order = txn.getOpenOrder(bobId);
txn.createInvoice(computeInvoice(bob, metadata));
});
Interface default methods, and mix-ins such as SqlObject and Transactional, allow you to run your code with the on-demand handle held open. Re-entrant calls on the same thread will receive the same Handle. The handle will be closed when the outermost on-demand invocation completes.
Returning cursor-like types such as Stream<T> or Iterable<T> outside of the outermost on-demand call does not work. Since the Handle is closed, the database cursor is released and reading will fail.
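To make the pitfall concrete, here is a sketch using a hypothetical UserDao (the "user" table and User bean are assumed from the introduction): the Stream must be consumed before the temporary handle closes.

```java
// Hypothetical DAO for illustration; assumes the "user" table and User bean.
public interface UserDao {
    @SqlQuery("SELECT * FROM \"user\" ORDER BY \"name\"")
    @RegisterBeanMapper(User.class)
    Stream<User> streamUsers();
}

// Broken with on-demand: the Stream escapes the outermost call, so the
// Handle (and the database cursor backing the Stream) is already closed:
// Stream<User> leaked = jdbi.onDemand(UserDao.class).streamUsers();

// Works: consume the cursor while the temporary handle is still open.
List<String> names = jdbi.withExtension(UserDao.class, dao -> {
    try (Stream<User> users = dao.streamUsers()) {
        return users.map(User::getName).collect(Collectors.toList());
    }
});
```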
6.6. Additional Annotations
Jdbi provides dozens of annotations out of the box:
- org.jdbi.v3.sqlobject.config provides annotations for things that can be configured at the Jdbi or Handle level. This includes registration of mappers and arguments, and configuration of SQL statement rendering and parsing.
- org.jdbi.v3.sqlobject.customizer provides annotations for binding parameters, defining attributes, and controlling the fetch behavior of the statement’s result set.
- org.jdbi.v3.jpa provides the @BindJpa annotation, for binding properties to columns according to JPA @Column annotations.
- org.jdbi.v3.sqlobject.locator provides annotations that configure Jdbi to load SQL statements from an alternative source, e.g. a file on the classpath.
- org.jdbi.v3.sqlobject.statement provides the @MapTo annotation, which is used for dynamically specifying the mapped type at the time the method is invoked.
- org.jdbi.v3.stringtemplate4 provides annotations that configure Jdbi to load SQL from StringTemplate 4 .stg files on the classpath, and/or to parse SQL templates using the ST4 template engine.
- org.jdbi.v3.sqlobject.transaction provides annotations for transaction management in a SQL object. See Transaction Management for details.
Jdbi is designed to support user-defined annotations. See User-Defined Annotations for a guide to creating your own.
6.7. Annotations and Inheritance
SQL Objects inherit methods and annotations from the interfaces they extend:
package com.app.dao;
@UseClasspathSqlLocator (1) (2)
public interface CrudDao<T, ID> {
@SqlUpdate (3)
void insert(@BindBean T entity);
@SqlQuery (3)
Optional<T> findById(ID id);
@SqlQuery
List<T> list();
@SqlUpdate
void update(@BindBean T entity);
@SqlUpdate
void deleteById(ID id);
}
1 | See SqlLocator. |
2 | Class annotations are inherited by subtypes. |
3 | Method and parameter annotations are inherited by subtypes, unless the subtype overrides the method. |
package com.app.contact;
@RegisterBeanMapper(Contact.class)
public interface ContactDao extends CrudDao<Contact, Long> {}
package com.app.account;
@RegisterConstructorMapper(Account.class)
public interface AccountDao extends CrudDao<Account, UUID> {}
In this example we’re using the @UseClasspathSqlLocator annotation, so each method will use SQL loaded from the classpath. Thus, ContactDao methods will use SQL from:
- /com/app/contact/ContactDao/insert.sql
- /com/app/contact/ContactDao/findById.sql
- /com/app/contact/ContactDao/list.sql
- /com/app/contact/ContactDao/update.sql
- /com/app/contact/ContactDao/deleteById.sql
Whereas AccountDao will use SQL from:
- /com/app/account/AccountDao/insert.sql
- /com/app/account/AccountDao/findById.sql
- /com/app/account/AccountDao/list.sql
- /com/app/account/AccountDao/update.sql
- /com/app/account/AccountDao/deleteById.sql
Suppose Account used name()-style accessors instead of getName(). In that case, we’d want AccountDao to use @BindMethods instead of @BindBean. Let’s override those methods with the right annotations:
package com.app.account;
@RegisterConstructorMapper(Account.class)
public interface AccountDao extends CrudDao<Account, UUID> {
@Override
@SqlUpdate (1)
void insert(@BindMethods Account entity);
@Override
@SqlUpdate (1)
void update(@BindMethods Account entity);
}
1 | Method annotations are not inherited on override, so we must duplicate those we want to keep. |
7. Testing
The official test support from Jdbi is in the jdbi3-testing package. There are a number of additional JUnit 5 extensions in the jdbi-core test artifact. These are only intended for Jdbi internal use and are not part of the official, public API.
7.1. JUnit 4
The jdbi3-testing artifact provides the JdbiRule class, which implements TestRule and can be used with the @Rule and @ClassRule annotations. It provides helpers for writing JUnit 4 tests integrated with a managed database instance. This makes writing unit tests quick and easy! You must remember to include the database dependency itself; for example, to get a pure-Java H2 database:
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.4.200</version>
<scope>test</scope>
</dependency>
JUnit 4 supports the OTJ embedded postgres component, which needs to be included when using Postgres:
<dependency>
<groupId>com.opentable.components</groupId>
<artifactId>otj-pg-embedded</artifactId>
<version>0.13.4</version>
<scope>test</scope>
</dependency>
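A minimal JUnit 4 test using JdbiRule might look like the following sketch. It assumes the jdbi3-testing and H2 dependencies above; JdbiRule.h2() creates a throwaway in-memory database per test, and the withPlugin call and table schema are illustrative assumptions.

```java
public class UserDaoTest {
    @Rule
    public JdbiRule dbRule = JdbiRule.h2().withPlugin(new SqlObjectPlugin());

    @Test
    public void insertAndCount() {
        Jdbi jdbi = dbRule.getJdbi();
        jdbi.useHandle(handle -> {
            // hypothetical schema for this test only
            handle.execute("create table users (id int primary key, name varchar)");
            handle.execute("insert into users (id, name) values (?, ?)", 1, "Alice");
            int count = handle.createQuery("select count(*) from users")
                .mapTo(int.class)
                .one();
            assertEquals(1, count);
        });
    }
}
```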
7.2. JUnit 5
The jdbi3-testing artifact provides JdbiExtension for JUnit 5 based tests. It supports both the @RegisterExtension and @ExtendWith annotations.
When using @RegisterExtension, the extensions can be customized further:
public class Test {
@RegisterExtension
public JdbiExtension h2Extension = JdbiExtension.h2()
.withPlugin(new SqlObjectPlugin());
@Test
public void testWithJunit5() {
Jdbi jdbi = h2Extension.getJdbi();
Handle handle = h2Extension.openHandle();
[...]
}
}
When using the @ExtendWith declarative extension, test methods can access the Jdbi and Handle objects through method parameters:
@ExtendWith(JdbiH2Extension.class)
public class Test {
@Test
public void testWithJunit5(Jdbi jdbi, Handle handle) {
[...]
}
}
The instances injected are the same instances as returned by getJdbi() and getSharedHandle().
The javadoc for JdbiExtension contains further information on how to customize a programmatically registered instance.
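One common customization is running schema setup before each test. The following sketch assumes JdbiExtension.withInitializer (which receives the DataSource and a Handle) and a hypothetical users table; check the JdbiExtension javadoc for the exact builder methods available in your version.

```java
public class SchemaTest {
    @RegisterExtension
    public JdbiExtension h2Extension = JdbiExtension.h2()
        .withPlugin(new SqlObjectPlugin())
        // initializer runs before each test; (dataSource, handle) signature
        // is from JdbiExtensionInitializer
        .withInitializer((dataSource, handle) ->
            handle.execute("create table users (id int primary key, name varchar)"));

    @Test
    public void tableIsReady() {
        int count = h2Extension.getSharedHandle()
            .createQuery("select count(*) from users")
            .mapTo(int.class)
            .one();
        assertEquals(0, count);
    }
}
```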
8. Third-Party Integration
8.1. Google Guava
This plugin adds support for the following types:
- Optional<T> - registers an argument and mapper. Supports Optional<T> for any wrapped type T for which a mapper / argument factory is registered.
- Most Guava collection and map types - see GuavaCollectors for a complete list of supported types.
To use this plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-guava</artifactId>
</dependency>
Then install the plugin into your Jdbi instance:
jdbi.installPlugin(new GuavaPlugin());
With the plugin installed, any supported Guava collection type can be returned from a SQL object method:
public interface UserDao {
@SqlQuery("select * from users order by name")
ImmutableList<User> list();
@SqlQuery("select * from users where id = :id")
com.google.common.base.Optional<User> getById(long id);
}
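With the fluent API, the same container types can be requested via GenericType. A sketch, reusing the users table and User bean assumed above:

```java
// collectInto materializes the result set into the Guava container type
ImmutableList<User> users = jdbi.withHandle(handle ->
    handle.createQuery("select * from users order by name")
        .mapToBean(User.class)
        .collectInto(new GenericType<ImmutableList<User>>() {}));
```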
8.2. H2 Database
This plugin configures Jdbi to correctly handle integer[] and uuid[] data types in an H2 database.
This plugin is included with the core jar (but may be extracted to a separate artifact in the future). Use it by installing the plugin into your Jdbi instance:
jdbi.installPlugin(new H2DatabasePlugin());
8.3. JSON
The jdbi3-json module adds a @Json type qualifier that lets you store arbitrary Java objects as JSON data in your database.
The actual JSON (de)serialization code is not included. For that, you must install a backing plugin (see below).
Backing plugins will install the JsonPlugin for you. You do not need to install it yourself or include the jdbi3-json dependency directly.
The feature has been tested with Postgres json columns and varchar columns in H2 and Sqlite.
8.3.1. Jackson 2
This plugin provides JSON backing through Jackson 2.
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-jackson2</artifactId>
</dependency>
jdbi.installPlugin(new Jackson2Plugin());
// optionally configure your ObjectMapper (recommended)
jdbi.getConfig(Jackson2Config.class).setMapper(myObjectMapper);
// now with simple support for Json Views if you want to filter properties:
jdbi.getConfig(Jackson2Config.class).setView(ApiProperty.class);
8.3.2. Gson 2
This plugin provides JSON backing through Gson 2.
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-gson2</artifactId>
</dependency>
jdbi.installPlugin(new Gson2Plugin());
// optional
jdbi.getConfig(Gson2Config.class).setGson(myGson);
8.3.3. Moshi
This plugin provides JSON backing through Moshi.
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-moshi</artifactId>
</dependency>
jdbi.installPlugin(new MoshiPlugin());
// optional
jdbi.getConfig(MoshiConfig.class).setMoshi(myMoshi);
8.3.4. Operation
Any bound object qualified as @Json will be converted by the registered JsonMapper and requalified as @EncodedJson String. A corresponding @EncodedJson ArgumentFactory will then be called to store the JSON data, allowing special JSON handling for your database to be implemented. If none is found, a factory for plain String will be used instead, to handle the JSON as plaintext.
Mapping works just the same way, but in reverse: an output type qualified as @Json T will be fetched from an @EncodedJson String or String ColumnMapper, and then passed through the JsonMapper.
Our PostgresPlugin provides qualified factories that will bind/map the @EncodedJson String to/from json or jsonb typed columns.
8.3.5. Usage
handle.execute("create table myjsons (id serial not null, value json not null)");
SqlObject:
// any json-serializable type
class MyJson {}
// use @Json qualifier:
interface MyJsonDao {
@SqlUpdate("insert into myjsons (value) values(:value)")
// on parameters
int insert(@Json MyJson value);
@SqlQuery("select value from myjsons")
// on result types
@Json
List<MyJson> select();
}
// also works on bean or property-mapped objects:
class MyBean {
private final MyJson property;
@Json
public MyJson getProperty() { return ...; }
}
With the Fluent API, you provide a QualifiedType<T> any place you’d normally provide a Class<T> or GenericType<T>:
QualifiedType<MyJson> qualifiedType = QualifiedType.of(MyJson.class).with(Json.class);
h.createUpdate("insert into myjsons(value) values(:json)")
.bindByType("json", new MyJson(), qualifiedType)
.execute();
MyJson result = h.createQuery("select value from myjsons")
.mapTo(qualifiedType)
.one();
8.4. Immutables
Immutables is an annotation processor that generates value types based on simple interface descriptions. The value types naturally map very well to Jdbi property binding and row mapping.
Immutables support is still experimental and does not yet support custom naming schemes. We do support the configurable get, is, and set prefixes.
Just tell us about your types by installing the plugin and configuring your Immutables type:
jdbi.getConfig(JdbiImmutables.class).registerImmutable(MyValueType.class)
The configuration will both register appropriate RowMappers as well as configure the new bindPojo (or @BindPojo) binders:
@Value.Immutable
public interface Train {
String name();
int carriages();
boolean observationCar();
}
@Test
public void simpleTest() {
jdbi.getConfig(JdbiImmutables.class).registerImmutable(Train.class);
try (Handle handle = jdbi.open()) {
handle.execute("create table train (name varchar, carriages int, observation_car boolean)");
assertThat(
handle.createUpdate("insert into train(name, carriages, observation_car) values (:name, :carriages, :observationCar)")
.bindPojo(ImmutableTrain.builder().name("Zephyr").carriages(8).observationCar(true).build())
.execute())
.isOne();
assertThat(
handle.createQuery("select * from train")
.mapTo(Train.class)
.one())
.extracting("name", "carriages", "observationCar")
.containsExactly("Zephyr", 8, true);
}
}
8.5. Freebuilder
Freebuilder is an annotation processor that generates value types based on simple interface or abstract class descriptions. Jdbi supports Freebuilder in much the same way that it supports Immutables.
Freebuilder support is still experimental and may not support all Freebuilder implemented features. We do support both JavaBean style getters and setters as well as unprefixed getters and setters.
Just tell us about your Freebuilder types by installing the plugin and configuring your Freebuilder type:
jdbi.getConfig(JdbiFreeBuilders.class).registerFreeBuilder(MyFreeBuilderType.class)
The configuration will both register appropriate RowMappers as well as configure the new bindPojo (or @BindPojo) binders:
@FreeBuilder
public interface Train {
String name();
int carriages();
boolean observationCar();
class Builder extends FreeBuildersTest_Train_Builder {}
}
@Test
public void simpleTest() {
jdbi.getConfig(JdbiFreeBuilders.class).registerFreeBuilder(Train.class);
try (Handle handle = jdbi.open()) {
handle.execute("create table train (name varchar, carriages int, observation_car boolean)");
Train train = new Train.Builder()
.name("Zephyr")
.carriages(8)
.observationCar(true)
.build();
assertThat(
handle.createUpdate("insert into train(name, carriages, observation_car) values (:name, :carriages, :observationCar)")
.bindPojo(train)
.execute())
.isOne();
assertThat(
handle.createQuery("select * from train")
.mapTo(Train.class)
.one())
.extracting("name", "carriages", "observationCar")
.containsExactly("Zephyr", 8, true);
}
}
8.6. JodaTime
This plugin adds support for using joda-time’s DateTime type.
To use this plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-jodatime2</artifactId>
</dependency>
Then install the plugin into your Jdbi instance:
jdbi.installPlugin(new JodaTimePlugin());
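With the plugin installed, DateTime values can be bound and mapped directly. A sketch, using a hypothetical events table:

```java
// bind a joda-time DateTime as an argument, then map it back from a column
DateTime created = jdbi.withHandle(handle -> {
    handle.execute("create table events (id int primary key, created timestamp)");
    handle.execute("insert into events (id, created) values (?, ?)", 1, DateTime.now());
    return handle.createQuery("select created from events where id = :id")
        .bind("id", 1)
        .mapTo(DateTime.class)
        .one();
});
```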
8.7. Google Guice
Guice support is experimental.
The Guice module adds support for configuring and injecting Jdbi instances in applications and services that use the Google Guice dependency injection framework.
Guice support for Jdbi is split into two module types:
- Jdbi definition modules extend the AbstractJdbiDefinitionModule class. Each of these modules creates a new Jdbi instance which is exposed into the Guice binding process using an annotation.
- Jdbi element configuration modules extend the AbstractJdbiConfigurationModule. These modules contribute elements that are referenced in Jdbi definition modules.
Jdbi definition modules are by far more common. Element configuration modules are completely optional. They are most useful in larger projects.
8.7.1. Definition modules
Every Jdbi instance is defined in its own Guice module which extends the AbstractJdbiDefinitionModule base class. The annotation instance or class used on the constructor is used to bind the resulting Jdbi object:
class GuiceJdbiModule extends AbstractJdbiDefinitionModule {
public GuiceJdbiModule() {
super(GuiceJdbi.class);
}
@Override
protected void configureJdbi() {
// bind a custom row mapper
bindRowMapper().to(CustomRowMapper.class);
// bind a custom column mapper
bindColumnMapper().toInstance(new CustomColumnMapper());
// bind a Codec
bindCodec(String.class).to(Key.get(CustomStringCodec.class, Custom.class));
// bind a custom array type
bindArrayType(CustomArrayType.class).toInstance("custom_array");
// bind a jdbi plugin
bindPlugin().toInstance(new SqlObjectPlugin());
// bind a customizer
bindCustomizer().to(SpecialCustomizer.class);
}
}
class Application {
@Inject
@GuiceJdbi
private Jdbi jdbi;
public static void main(String... args) {
Injector inj = Guice.createInjector(
new GuiceJdbiModule(),
binder -> binder.bind(DataSource.class).annotatedWith(GuiceJdbi.class).toInstance(... data source instance ...));
inj.injectMembers(new Application());
}
}
In this example, a new Jdbi is defined using specific mappers and other customizations. The Jdbi object is exposed to Guice using the annotation passed on the module constructor (GuiceJdbi in the example). The AbstractJdbiDefinitionModule supports both annotation instances and classes, similar to Guice itself.
A Jdbi definition module requires that a DataSource object is bound within Guice using the same annotation or annotation class as is passed into the module constructor. Creating this data source is outside the scope of a Jdbi definition module!
When implementing the configureJdbi() method, a number of convenience methods are available as shown above. These methods return Guice LinkedBindingBuilder instances and allow the full range of bindings that Guice supports (classes, instances, providers etc.).
Each Jdbi definition module is completely independent and all definitions within the module only apply to the specific Jdbi instance.
8.7.2. Using Guice injection in Jdbi classes
The Jdbi related guice modules store the various related elements (mappers, codecs etc.) using Multibindings. As a result, it is not possible to use injection directly when constructing mapper instances.
The following example does not work:
class JdbiModule extends AbstractJdbiDefinitionModule {
public JdbiModule() {
super(Names.named("data"));
}
@Override
protected void configureJdbi() {
bindRowMapper().to(BrokenRowMapper.class);
bindColumnMapper().to(CustomStringMapper.class);
}
}
class CustomStringMapper implements ColumnMapper<String> {
@Override
public String map(ResultSet r, int columnNumber, StatementContext ctx) throws SQLException {
long x = r.getLong(columnNumber);
return (x > 1000) ? "Huge" : "Small";
}
}
class BrokenRowMapper implements RowMapper<DataRow> {
private final ColumnMapper<String> stringMapper;
@Inject
public BrokenRowMapper(CustomStringMapper stringMapper) {
this.stringMapper = stringMapper;
}
@Override
public DataRow map(ResultSet rs, StatementContext ctx) throws SQLException {
return new DataRow(rs.getInt("intValue"),
stringMapper.map(rs, "longValue", ctx));
}
}
Guice will report an error that it cannot locate the CustomStringMapper instance. The Guice Jdbi integration manages mappers etc. as groups, and the separate instances are not directly accessible for injection. The right way to compose mappers is using the Jdbi configuration (see JdbiConfig), which is configured through Guice:
class JdbiModule extends AbstractJdbiDefinitionModule {
public JdbiModule() {
super(Names.named("data"));
}
@Override
protected void configureJdbi() {
bindRowMapper().to(WorkingRowMapper.class);
bindColumnMapper(CustomStringMapper.TYPE).to(CustomStringMapper.class);
}
}
class CustomStringMapper implements ColumnMapper<String> {
public static QualifiedType<String> TYPE = QualifiedType.of(String.class).with(Names.named("data"));
@Override
public String map(ResultSet r, int columnNumber, StatementContext ctx) throws SQLException {
long x = r.getLong(columnNumber);
return (x > 1000) ? "Huge" : "Small";
}
}
class WorkingRowMapper implements RowMapper<DataRow> {
private ColumnMapper<String> stringMapper;
@Override
public void init(ConfigRegistry registry) {
this.stringMapper = registry.get(ColumnMappers.class).findFor(CustomStringMapper.TYPE)
.orElseThrow(IllegalStateException::new);
}
@Override
public DataRow map(ResultSet rs, StatementContext ctx) throws SQLException {
return new DataRow(rs.getInt("intValue"),
stringMapper.map(rs, "longValue", ctx));
}
}
This limitation only applies to bindings made through the various bindXXX() methods in Jdbi specific modules. Any other binding is available for injection into Jdbi elements:
class ThresholdMapper implements ColumnMapper<String> {
public static QualifiedType<String> TYPE = QualifiedType.of(String.class).with(Threshold.class);
private final long threshold;
// Injection of a named constant here.
@Inject
ThresholdMapper(@Threshold long threshold) {
this.threshold = threshold;
}
@Override
public String map(ResultSet r, int columnNumber, StatementContext ctx) throws SQLException {
long x = r.getLong(columnNumber);
return (x > threshold) ? "Huge" : "Small";
}
}
class JdbiModule extends AbstractJdbiDefinitionModule {
public JdbiModule() {
super(Data.class);
}
@Override
protected void configureJdbi() {
bindColumnMapper(ThresholdMapper.TYPE).to(ThresholdMapper.class);
}
}
Injector inj = Guice.createInjector(
new JdbiModule(),
// define injected constant here
binder -> binder.bindConstant().annotatedWith(Threshold.class).to(5000L));
8.7.3. Jdbi customization using Guice
Many Jdbi specific settings can be configured through the bindXXX()
methods available on Jdbi modules (row mappers, column mappers, codecs, plugins etc.)
However, there are additional features that may not be available through these methods. For these use cases, the GuiceJdbiCustomizer interface can be used.
Instances that implement this interface can be added to Jdbi modules using the bindCustomizer() method.
Every customizer will get the Jdbi instance passed at construction time and may modify any aspect before it gets exposed to other parts of the application.
class GuiceCustomizationModule extends AbstractJdbiDefinitionModule {
public GuiceCustomizationModule() {
super(Custom.class);
}
@Override
protected void configureJdbi() {
bindCustomizer().to(MyCustomizer.class);
}
}
class MyCustomizer implements GuiceJdbiCustomizer {
@Override
public void customize(Jdbi jdbi) {
// set the logger to use Slf4j
jdbi.setSqlLogger(new Slf4JSqlLogger());
}
}
In combination with Jdbi configuration modules, these customizers allow easy enforcement of standard configurations for all Jdbi instances in larger projects.
8.7.4. Element configuration modules
Element configuration modules are completely optional and should not be used when only a single Jdbi instance is required. They are intended to help with code organization in larger projects that have more complex needs.
All bindings in a Jdbi module that defines a Jdbi object are local to that module. This is useful if all Jdbi related code can be grouped around the module. In larger projects, some parts of the code (and their Jdbi related elements such as row and column mappers) may be located in different parts of the code base.
In larger projects, generic mappers should be available for multiple Jdbi instances. This often leads to a proliferation of small modules that only contain such generic code and are in turn imported into every code module that wants to use them.
To support modular code design, any part of a code base that wants to contribute Jdbi specific classes such as mappers to the overall system can use an element configuration module to expose these to all Jdbi instances in a project.
Jdbi element configuration modules extend AbstractJdbiConfigurationModule and can define mappers, plugins etc. similar to a Jdbi definition module. Anything that is registered in such a module is global and will be applied to all instances even if they are defined in another module.
class DomainModule extends AbstractJdbiConfigurationModule {
@Override
protected void configureJdbi() {
bindRowMapper().to(DomainMapper.class);
}
}
class DomainMapper implements RowMapper<DomainObject> {
private ColumnMapper<UUID> uuidMapper;
@Override
public void init(ConfigRegistry registry) {
this.uuidMapper = registry.get(ColumnMappers.class).findFor(UUID.class)
.orElseThrow(IllegalStateException::new);
}
@Override
public DomainObject map(ResultSet rs, StatementContext ctx) throws SQLException {
return new DomainObject(
uuidMapper.map(rs, "id", ctx),
rs.getString("name"),
rs.getString("data"),
rs.getInt("value"),
rs.getBoolean("flag"));
}
}
If the DomainModule is bound within Guice, then all configured Jdbi instances will be able to map DomainObject instances without having to configure them explicitly as a row mapper.
Multiple modules extending AbstractJdbiConfigurationModule can be installed in a single injector; the resulting bindings will be aggregated.
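Wiring this together might look like the following sketch. GuiceJdbiModule and the GuiceJdbi annotation are reused from the definition-module example above; the dataSource instance is assumed to be created elsewhere.

```java
Injector injector = Guice.createInjector(
    new DomainModule(),       // contributes DomainMapper to every Jdbi instance
    new GuiceJdbiModule(),    // defines the @GuiceJdbi Jdbi instance
    binder -> binder.bind(DataSource.class)
        .annotatedWith(GuiceJdbi.class)
        .toInstance(dataSource));

Jdbi jdbi = injector.getInstance(Key.get(Jdbi.class, GuiceJdbi.class));
// DomainObject rows can now be mapped without explicit registration
```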
It is not possible to install a configuration module from within the configureJdbi method of a definition module using the install() or binder().install() methods! Definition modules are Guice private modules and anything defined within them will not be exposed to the general dependency tree. This is a limitation due to the way Guice works.
8.7.5. Advanced Topics
Exposing additional bindings
Each definition module that defines a Jdbi instance keeps all bindings private and exposes only the actual Jdbi object itself. This allows the installation of multiple modules where each definition is completely independent. Sometimes it is useful to attach additional objects and expose them using the same annotations. The most common use cases are data access objects.
Consider a use case where two DataSource instances exist, one annotated as Writer and the other as Reader. Both are accessing databases with the same schema, and it makes sense to have two data access objects that are identical except that they are using the different data sources (this is often referred to as the "robot legs" problem of dependency injection).
interface DatabaseAccess {
@SqlUpdate("INSERT INTO data_table ....")
int insert(...);
@SqlQuery("SELECT * FROM data_table")
Data select();
}
To bind two instances of this data access object and connect each to the appropriate Jdbi instance, add the binding to the Jdbi definition module and expose it with exposeBinding(Class<?>) or exposeBinding(TypeLiteral<?>):
class DatabaseModule extends AbstractJdbiDefinitionModule {
public DatabaseModule(Class<? extends Annotation> a) {
super(a);
}
@Provides
@Singleton
DatabaseAccess createDatabaseAccess(Jdbi jdbi) {
return jdbi.onDemand(DatabaseAccess.class);
}
@Override
public void configureJdbi() {
... bind mappers, plugins etc. ...
exposeBinding(DatabaseAccess.class);
}
}
Now install the module multiple times with different annotation classes:
Injector injector = Guice.createInjector(
// bind existing data sources
binder -> binder.bind(DataSource.class).annotatedWith(Reader.class).toInstance(...),
binder -> binder.bind(DataSource.class).annotatedWith(Writer.class).toInstance(...),
new DatabaseModule(Reader.class),
new DatabaseModule(Writer.class)
);
// fetch object directly from the injector
DatabaseAccess readerAccess = injector.getInstance(Key.get(DatabaseAccess.class, Reader.class));
DatabaseAccess writerAccess = injector.getInstance(Key.get(DatabaseAccess.class, Writer.class));
Importing external bindings
The main use case of Guice is code modularization and code reuse. Jdbi definition modules can pull dependencies out of the global dependency definitions using the importBinding and importBindingLoosely methods.
importBinding requires a dependency to exist and pulls it into the definition module. The dependency must be defined using the same annotation or annotation class as the definition module uses.
This example shows how to define an external dependency (SpecialLogger, annotated with Store) in a different module and then pull it into the definition module using importBinding:
//
// This is logging code that can be located e.g. in a specific part of the code base that
// deals with all aspects of logging. The Logging module creates the binding for the special
// logger depending on the environment that the code has been deployed in.
class LoggingModule extends AbstractModule {
private final boolean production;
private final Class<? extends Annotation> annotation;
LoggingModule(boolean production, Class<? extends Annotation> annotation) {
this.production = production;
this.annotation = annotation;
}
@Override
public void configure() {
if (production) {
bind(SpecialLogger.class).annotatedWith(annotation).toInstance(new MaskingLogger());
} else {
bind(SpecialLogger.class).annotatedWith(annotation).toInstance(new DebugLogger());
}
}
}
//
// This is Jdbi code that deals with the data store. It can be located in a different part of the
// application. It requires the "SpecialLogger" dependency to be bound somewhere.
//
@Singleton
class JdbiSpecialLogging implements GuiceJdbiCustomizer {
private final SpecialLogger logger;
@Inject
JdbiSpecialLogging(SpecialLogger logger) {
this.logger = logger;
}
@Override
public void customize(Jdbi jdbi) {
jdbi.setSqlLogger(new SpecialSqlLogger(logger));
}
}
class DatabaseModule extends AbstractJdbiDefinitionModule {
public DatabaseModule(Class<? extends Annotation> a) {
super(a);
}
@Override
public void configureJdbi() {
... bind mappers, plugins etc. ...
// pull the "SpecialLogger" annotated with Store into the module scope
importBinding(SpecialLogger.class).in(Scopes.SINGLETON);
bindCustomizer().to(JdbiSpecialLogging.class);
}
}
Injector injector = Guice.createInjector(
new LoggingModule(production, Store.class),
new DatabaseModule(Store.class)
);
importBinding returns a binding builder that allows different binding styles:
class DatabaseModule extends AbstractJdbiDefinitionModule {
@Override
public void configureJdbi() {
// simplest use case, pull in Foo.class using the same annotation
importBinding(Foo.class);
// supports everything that a ScopedBindingBuilder does
importBinding(Foo.class).in(Scopes.SINGLETON);
// supports "to()" to bind interfaces to implementation
importBinding(Foo.class).to(FooImpl.class);
// supports type literals
importBinding(new TypeLiteral<Set<Foo>>() {}).to(FooSet.class);
// supports binding into the various binder methods as well
// pull SpecialCustomizer using the same annotation as the module and add it to the set of customizers
importBinding(bindCustomizer(), SpecialCustomizer.class).in(Scopes.SINGLETON);
// bind column mapper using a type literal
importBinding(bindColumnMapper(), new TypeLiteral<Map<String, Object>>() {}).to(JsonMapper.class);
}
}
Static bindings require that the dependency is always defined. However, it is often desirable to have optional bindings that do not need to exist. This is supported using the importBindingLoosely mechanism.
class DropwizardStoreModule extends AbstractModule {
@Store
@Provides
@Singleton
DropwizardJdbiSupport getDropwizardJdbiSupport(@Dropwizard DataSourceConfiguration configuration, @Dropwizard Environment environment) {
return new DropwizardJdbiSupport("store", configuration, environment);
}
static class DropwizardJdbiSupport implements GuiceJdbiCustomizer {
private final String name;
private final Environment environment;
private final DataSourceConfiguration<?> configuration;
DropwizardJdbiSupport(String name, DataSourceConfiguration configuration, Environment environment) {
this.name = name;
this.configuration = configuration;
this.environment = environment;
}
@Override
public void customize(final Jdbi jdbi) {
final String validationQuery = configuration.getValidationQuery();
environment.healthChecks().register(name, new JdbiHealthCheck(
environment.getHealthCheckExecutorService(),
configuration.getValidationQueryTimeout().orElse(Duration.seconds(5)),
jdbi,
Optional.of(validationQuery)));
}
}
}
class StoreJdbiModule extends AbstractJdbiDefinitionModule {
@Override
public void configureJdbi() {
... other JDBI code bindings for the store ...
importBindingLoosely(bindCustomizer(), GuiceJdbiCustomizer.class)
.withDefault(GuiceJdbiCustomizer.NOP)
.to(DropwizardJdbiSupport.class);
}
}
// production code (running in dropwizard framework)
Injector injector = Guice.createInjector(
new DropwizardModule(), // binds @Dropwizard stuff
new DropwizardStoreModule(), // binds dropwizard support for store jdbi
new StoreDataSourceModule(), // binds @Store DataSource
new StoreJdbiModule() // Store Jdbi code
);
// test code
Injector injector = Guice.createInjector(
new StoreTestingDataSourceModule(), // testing datasource for store
new StoreJdbiModule() // store Jdbi code
);
In this example there is code specific to the Dropwizard framework that would not work in unit tests (which are not run within the framework). This code is only bound in the production environment using the DropwizardStoreModule and is not present in testing. The StoreJdbiModule uses importBindingLoosely to pull in the DropwizardJdbiSupport binding using the Store annotation if it exists, or uses a no-op otherwise. importBindingLoosely allows for full decoupling of optional dependencies without having to resort to conditionals or separate testing modules.
class DatabaseModule extends AbstractJdbiDefinitionModule {
@Override
public void configureJdbi() {
// simplest use case, pull in Foo.class using the same annotation
importBindingLoosely(Foo.class);
// supports everything that a ScopedBindingBuilder does
importBindingLoosely(Foo.class).in(Scopes.SINGLETON);
// supports "to()" to bind interfaces to implementation
importBindingLoosely(Foo.class).to(FooImpl.class);
// supports default value that is used if the binding
// is not present
importBindingLoosely(Foo.class)
.withDefault(new Foo("default"));
// supports type literals
importBindingLoosely(new TypeLiteral<Set<Foo>>() {}).to(FooSet.class);
// supports binding into the various binder methods as well
// pull SpecialCustomizer using the same annotation as the module and add it to the set of customizers
importBindingLoosely(bindCustomizer(), SpecialCustomizer.class).in(Scopes.SINGLETON);
// bind column mapper using a type literal
importBindingLoosely(bindColumnMapper(), new TypeLiteral<Map<String, Object>>() {}).to(JsonMapper.class);
// full example
importBindingLoosely(bindCustomizer(), GuiceJdbiCustomizer.class)
.withDefault(GuiceJdbiCustomizer.NOP)
.to(SpecialCustomizer.class)
.asEagerSingleton();
}
}
Custom element configuration modules
In larger projects, Element configuration modules help to organize the various Jdbi related elements. By default, all modules contribute their configuration to a single, global configuration that is used in all Jdbi definition modules.
Sometimes it is useful to create separate configurations that only affect a subset of Jdbi definitions. This can be done by using a custom annotation for both the Jdbi element configuration and the Jdbi definition modules:
class CustomConfigurationModule extends AbstractJdbiConfigurationModule {
CustomConfigurationModule() {
super(CustomConfiguration.class); // definition modules must use this annotation explicitly
}
@Override
public void configureJdbi() {
bindColumnMapper().to(CustomColumnMapper.class);
bindPlugin().to(SpecialDatabaseModule.class);
}
}
class SpecialDatabaseModule extends AbstractJdbiDefinitionModule {
SpecialDatabaseModule() {
super(
SpecialDatabase.class, // The Jdbi instance is bound using this annotation class
CustomConfiguration.class // Use an explicit configuration
);
}
@Override
public void configureJdbi() {
...
}
}
The Jdbi element bound with the SpecialDatabase annotation will have the SpecialDatabaseModule loaded and can use the CustomColumnMapper.
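Wiring the custom configuration together might look like this sketch (SpecialDataSourceModule and the SpecialDatabase annotation are illustrative names assumed to be defined elsewhere in the project):

```java
// Hypothetical wiring code; module and annotation names are illustrative.
Injector injector = Guice.createInjector(
    new SpecialDataSourceModule(),   // binds the @SpecialDatabase DataSource
    new CustomConfigurationModule(), // element configuration for CustomConfiguration.class
    new SpecialDatabaseModule()      // defines the @SpecialDatabase Jdbi instance
);

// the Jdbi instance is bound using the SpecialDatabase annotation
Jdbi jdbi = injector.getInstance(Key.get(Jdbi.class, SpecialDatabase.class));
```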
8.8. JPA
Using the JPA plugin is a great way to trick your boss into letting you try Jdbi. "No problem boss, it already supports JPA annotations, easy peasy!"
This plugin adds mapping support for a small subset of JPA entity annotations:
-
Entity
-
MappedSuperclass
-
Column
To use this plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-jpa</artifactId>
</dependency>
Then install the plugin into your Jdbi
instance:
jdbi.installPlugin(new JpaPlugin());
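A minimal sketch of what an annotated entity and query might look like (table and column names are illustrative, and getters/setters are elided):

```java
// Hypothetical entity; only the supported subset of annotations is used.
@Entity
public class JpaUser {
    @Column(name = "user_id")
    private long id;

    @Column(name = "user_name")
    private String name;

    // getters and setters elided for brevity
}

// with the plugin installed, annotated entities map like any other type
List<JpaUser> users = handle
    .createQuery("select user_id, user_name from users")
    .mapTo(JpaUser.class)
    .list();
```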
Honestly though, just tear off the bandage and switch to Jdbi proper.
8.9. Kotlin
Kotlin support is provided by jdbi3-kotlin and jdbi3-kotlin-sqlobject modules.
8.9.1. ResultSet mapping
The jdbi3-kotlin plugin adds mapping to Kotlin data classes. It supports data classes where all fields are present in the constructor as well as classes with writable properties. Any fields not present in the constructor will be set after the constructor call. The mapper supports nullable types. It also uses default parameter values in the constructor if the parameter type is not nullable and the value absent in the result set.
To use this plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-kotlin</artifactId>
</dependency>
Ensure the Kotlin compiler’s JVM target version is set to at least 1.8:
<kotlin.compiler.jvmTarget>1.8</kotlin.compiler.jvmTarget>
Then install the plugin into your Jdbi
instance:
jdbi.installPlugin(KotlinPlugin());
The Kotlin mapper also supports the @ColumnName annotation, which allows specifying the name for a property or parameter explicitly, as well as the @Nested annotation for mapping nested Kotlin objects.
Instead of using @BindBean, bindBean(), and @RegisterBeanMapper, use @BindKotlin, bindKotlin(), and KotlinMapper for qualifiers on constructor parameters, getters, setters, and setter parameters of Kotlin classes.
|
The @ColumnName annotation only applies while mapping SQL data into Java
objects. When binding object properties (e.g. with bindBean() ), bind the
property name (:id ) rather than the column name (:user_id ).
|
If you load all Jdbi plugins via Jdbi.installPlugins()
this plugin will be
discovered and registered automatically. Otherwise, you can attach it using
Jdbi.installPlugin(KotlinPlugin())
.
An example from the test class:
data class IdAndName(val id: Int, val name: String)
data class Thing(
@Nested val idAndName: IdAndName,
val nullable: String?,
val nullableDefaultedNull: String? = null,
val nullableDefaultedNotNull: String? = "not null",
val defaulted: String = "default value"
)
@Test
fun testFindById() {
val qry = h2Extension.sharedHandle.createQuery("select id, name from something where id = :id")
val things: List<Thing> = qry.bind("id", brian.idAndName.id).mapTo<Thing>().list()
assertEquals(1, things.size)
assertEquals(brian, things[0])
}
There are two extensions to help:
-
<reified T : Any>ResultBearing.mapTo()
-
<T : Any>ResultIterable<T>.useSequence(block: (Sequence<T>) → Unit)
Allowing code like:
val qry = handle.createQuery("select id, name from something where id = :id")
val things = qry.bind("id", brian.id).mapTo<Thing>().list()
and for using a Sequence that is auto closed:
qryAll.mapTo<Thing>().useSequence {
it.forEach(::println)
}
8.9.2. SqlObject
The jdbi3-kotlin-sqlobject plugin adds automatic parameter binding by name for Kotlin methods in SqlObjects as well as support for Kotlin default methods.
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-kotlin-sqlobject</artifactId>
</dependency>
Then install the plugin into your Jdbi
instance:
jdbi.installPlugin(KotlinSqlObjectPlugin());
Parameter binding supports individual primitive types as well as Kotlin or
JavaBean style objects as a parameter (referenced in binding as
:paramName.propertyName
). No annotations are needed.
If you load all Jdbi plugins via Jdbi.installPlugins()
this plugin will be
discovered and registered automatically. Otherwise, you can attach the plugin
via: Jdbi.installPlugin(KotlinSqlObjectPlugin())
.
An example from the test class:
interface ThingDao {
@SqlUpdate("insert into something (id, name) values (:something.idAndName.id, :something.idAndName.name)")
fun insert(something: Thing)
@SqlQuery("select id, name from something")
fun list(): List<Thing>
}
@BeforeEach
fun setUp() {
val dao = h2Extension.jdbi.onDemand<ThingDao>()
val brian = Thing(IdAndName(1, "Brian"), null)
val keith = Thing(IdAndName(2, "Keith"), null)
dao.insert(brian)
dao.insert(keith)
}
@Test
fun testDao() {
val dao = h2Extension.jdbi.onDemand<ThingDao>()
val rs = dao.list()
assertEquals(2, rs.size.toLong())
assertEquals(brian, rs[0])
assertEquals(keith, rs[1])
}
8.9.3. Jackson JSON Processing
Jackson needs a specialized ObjectMapper instance in order to understand deserialization of Kotlin types. Make sure to configure your Jackson plugin:
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
jdbi.getConfig(Jackson2Config::class.java).mapper = jacksonObjectMapper()
8.10. Lombok
Lombok is a tool for auto-generation of mutable and immutable data classes through annotations.
@Data
public class DataClass {
private Long id;
private String name;
// autogenerates default constructor, getters, setters, equals, hashCode, and toString
}
@Value
public class ValueClass {
private long id;
private String name;
// autogenerates all-args constructor, getters, equals, hashCode, and toString
}
Classes annotated with @Value
are immutable, classes annotated with @Data
behave like standard, mutable Java Beans.
-
Use BeanMapper or @RegisterBeanMapper to map @Data classes.
-
Use ConstructorMapper or @RegisterConstructorMapper to map @Value classes.
-
Use bindBean() or @BindBean to bind @Data or @Value classes.
Lombok must be configured to propagate annotations from the source code into the generated classes. Since version 1.18.4, lombok
supports the lombok.copyableAnnotations
setting in its config file. To support all JDBI features, the following lines must be added:
lombok.copyableAnnotations += org.jdbi.v3.core.mapper.reflect.ColumnName
lombok.copyableAnnotations += org.jdbi.v3.core.mapper.Nested
lombok.copyableAnnotations += org.jdbi.v3.core.mapper.PropagateNull
lombok.copyableAnnotations += org.jdbi.v3.core.annotation.Unmappable
lombok.copyableAnnotations += org.jdbi.v3.core.mapper.reflect.JdbiConstructor
Any additional annotation (such as the @HStore
annotation for PostgreSQL support) can also be added:
lombok.copyableAnnotations += org.jdbi.v3.postgres.HStore
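With the copyable annotations configured, a Lombok @Value class can carry Jdbi mapping annotations directly on its fields. A sketch (table and column names are illustrative):

```java
// Hypothetical value class; @ColumnName is copied onto the generated constructor
@Value
public class Account {
    @ColumnName("account_id")
    long id;

    String name;
}

// @Value classes are mapped with the constructor mapper
handle.registerRowMapper(ConstructorMapper.factory(Account.class));

List<Account> accounts = handle
    .createQuery("select account_id, name from accounts")
    .mapTo(Account.class)
    .list();
```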
8.11. Oracle 12
This module adds support for Oracle RETURNING
DML expressions.
To use this feature, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-oracle12</artifactId>
</dependency>
Then, use the OracleReturning class with an Update or PreparedBatch to get the returned DML.
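A sketch of the RETURNING ... INTO pattern, assuming an Oracle table named something with a numeric id column:

```java
// register the OUT parameter type, then produce results from the returned DML
List<Integer> ids = handle
    .createUpdate("insert into something (id, name) values (17, 'Brian') returning id into ?")
    .addCustomizer(OracleReturning.returnParameters().register(0, OracleTypes.INTEGER))
    .execute(OracleReturning.returningDml())
    .mapTo(Integer.class)
    .list();
```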
8.12. PostgreSQL
The jdbi3-postgres plugin provides enhanced integration with the PostgreSQL JDBC Driver.
To use this feature, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-postgres</artifactId>
</dependency>
Then install the plugin into your Jdbi
instance.
Jdbi jdbi = Jdbi.create("jdbc:postgresql://host:port/database")
.installPlugin(new PostgresPlugin());
The plugin configures mappings for the Java 8 java.time types like Instant or Duration, InetAddress, UUID, typed enums, and hstore.
It also configures SQL array type support for int, long, float, double, String, and UUID.
See the javadoc for an exhaustive list.
Some Postgres operators, for example the ? query operator, collide
with jdbi or JDBC special characters. In such cases, you may need to
escape operators to e.g. ?? or \: .
|
8.12.1. hstore
The Postgres plugin provides a column mapper from hstore to Map<String, String>, as well as an argument factory for binding a Map<String, String> to an hstore column:
Map<String, String> accountAttributes = handle
.select("select attributes from account where id = ?", userId)
.mapTo(new GenericType<Map<String, String>>() {})
.one();
With @HStore
qualified type:
QualifiedType<Map<String, String>> HSTORE_MAP = QualifiedType.of(new GenericType<Map<String, String>>() {})
.with(HStore.class);
handle.createUpdate("update account set attributes = :hstore")
.bindByType("hstore", mapOfStrings, HSTORE_MAP)
.execute();
By default, SQL Object treats Map
return types as a collection of Map.Entry
values. Use the @SingleValue
annotation to override this, so that the return
type is treated as a single value instead of a collection:
public interface AccountDao {
@SqlQuery("select attributes from account where id = ?")
@SingleValue
Map<String, String> getAccountAttributes(long accountId);
}
The default variant to install the plugin adds unqualified mappings of raw Map types from and to the hstore Postgres data type. There are situations where this interferes with other mappings of maps. It is recommended to always use the variant with the @HStore qualified type.
|
To avoid installing the unqualified Argument and ColumnMapper bindings, install the plugin using the static factory method:
Jdbi jdbi = Jdbi.create("jdbc:postgresql://host:port/database")
.installPlugin(PostgresPlugin.noUnqualifiedHstoreBindings());
8.12.2. @GetGeneratedKeys
In Postgres, @GetGeneratedKeys
can return the entire modified row if you
request generated keys without naming any columns.
public interface UserDao {
@SqlUpdate("insert into users (id, name, created_on) values (nextval('user_seq'), ?, now())")
@GetGeneratedKeys
@RegisterBeanMapper(User.class)
User insert(String name);
}
If a database operation modifies multiple rows (e.g. an update that will modify several rows), your method can return all the modified rows in a collection:
public interface UserDao {
@SqlUpdate("update users set active = false where id = any(?)")
@GetGeneratedKeys
@RegisterBeanMapper(User.class)
List<User> deactivateUsers(long... userIds);
}
8.12.3. Large Objects
Postgres supports storing large character or binary data in separate storage from table data. Jdbi allows you to stream this data in and out of the database as part of an enclosing transaction. Storing, reading, and a delete hook are provided. The test case serves as a simple example:
public void blobCrud(InputStream myStream) throws IOException {
h.useTransaction(th -> {
Lobject lob = th.attach(Lobject.class);
lob.insert(1, myStream);
InputStream readItBack = lob.readBlob(1);
lob.deleteLob(1);
assert lob.readBlob(1) == null;
});
}
public interface Lobject {
// CREATE TABLE lob (id int, lob oid);
@SqlUpdate("insert into lob (id, lob) values (:id, :blob)")
void insert(int id, InputStream blob);
@SqlQuery("select lob from lob where id = :id")
InputStream readBlob(int id);
@SqlUpdate("delete from lob where id = :id returning lo_unlink(lob)")
void deleteLob(int id);
}
Please refer to Pg-JDBC docs for upstream driver documentation.
8.13. Spring5
This module provides JdbiFactoryBean
, a factory bean which sets up a Jdbi
singleton in a Spring 5 (or Spring Boot) application context.
To use this module, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-spring5</artifactId>
</dependency>
Then configure the Jdbi factory bean in your Spring container, e.g.:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">
(1)
<bean id="db" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="url" value="jdbc:h2:mem:testing"/>
</bean>
(2)
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="db"/>
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
(3)
<bean id="jdbi"
class="org.jdbi.v3.spring5.JdbiFactoryBean">
<property name="dataSource" ref="db"/>
</bean>
(4)
<bean id="service"
class="com.example.service.MyService">
<constructor-arg ref="jdbi"/>
</bean>
</beans>
1 | The SQL data source that Jdbi will connect to. In this example we use an H2 database, but it can be any JDBC-compatible database. |
2 | Enable configuration of transactions via annotations. |
3 | Configure JdbiFactoryBean using the data source configured earlier. |
4 | Inject Jdbi into a service class. Alternatively, use standard JSR-330
@Inject annotations on the target class instead of configuring it in
your beans.xml . |
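The service class injected in (4) might look like this minimal sketch (the class name comes from the example XML; the query is illustrative):

```java
package com.example.service;

import org.jdbi.v3.core.Jdbi;

public class MyService {
    private final Jdbi jdbi;

    // matches the <constructor-arg ref="jdbi"/> element above
    public MyService(Jdbi jdbi) {
        this.jdbi = jdbi;
    }

    public int countAccounts() {
        return jdbi.withHandle(handle ->
            handle.createQuery("select count(*) from account")
                .mapTo(int.class)
                .one());
    }
}
```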
8.13.1. Installing plugins
Plugins may be automatically installed by scanning the classpath for ServiceLoader manifests.
<bean id="jdbi" class="org.jdbi.v3.spring5.JdbiFactoryBean">
...
<property name="autoInstallPlugins" value="true"/>
</bean>
Plugins may also be installed explicitly:
<bean id="jdbi" class="org.jdbi.v3.spring5.JdbiFactoryBean">
...
<property name="plugins">
<list>
<bean class="org.jdbi.v3.sqlobject.SqlObjectPlugin"/>
<bean class="org.jdbi.v3.guava.GuavaPlugin"/>
</list>
</property>
</bean>
Not all plugins are automatically installable. In these situations, you can auto-install some plugins and manually install the rest:
<bean id="jdbi" class="org.jdbi.v3.spring5.JdbiFactoryBean">
...
<property name="autoInstallPlugins" value="true"/>
<property name="plugins">
<list>
<bean class="org.jdbi.v3.core.h2.H2DatabasePlugin"/>
</list>
</property>
</bean>
8.13.2. Global Attributes
Global defined attributes may be configured on the factory bean:
<bean id="jdbi" class="org.jdbi.v3.spring5.JdbiFactoryBean">
<property name="dataSource" ref="db"/>
<property name="globalDefines">
<map>
<entry key="foo" value="bar"/>
</map>
</property>
</bean>
8.14. SQLite
The jdbi3-sqlite plugin provides support for using the SQLite JDBC Driver with Jdbi.
The plugin configures mapping for the Java URL type, which is not supported by the driver.
To use this plugin, first add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-sqlite</artifactId>
</dependency>
Then install the plugin into your Jdbi
instance.
Jdbi jdbi = Jdbi.create("jdbc:sqlite:database")
.installPlugin(new SQLitePlugin());
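The URL mapping added by the plugin can be exercised like this sketch (in-memory database, illustrative table name; the calling method is assumed to declare throws MalformedURLException for the URL constructor):

```java
Jdbi jdbi = Jdbi.create("jdbc:sqlite::memory:")
    .installPlugin(new SQLitePlugin());

URL url = jdbi.withHandle(handle -> {
    handle.execute("create table links (id integer primary key, url varchar)");
    handle.execute("insert into links (id, url) values (?, ?)",
        1, new URL("https://example.com/"));
    return handle.createQuery("select url from links where id = ?")
        .bind(0, 1)
        .mapTo(URL.class)
        .one();
});
```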
8.15. StringTemplate 4
This module allows you to plug in the StringTemplate 4 templating engine, in place of the standard Jdbi templating engine.
To use this plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-stringtemplate4</artifactId>
</dependency>
To use StringTemplate format in SQL statements, set the template engine
to StringTemplateEngine
.
Defined attributes are provided to the StringTemplate engine to render the SQL:
String sortColumn = "name";
String sql = "select id, name " +
"from account " +
"order by <if(sort)> <sortBy>, <endif> id";
List<Account> accounts = handle.createQuery(sql)
.setTemplateEngine(new StringTemplateEngine())
.define("sort", true)
.define("sortBy", sortColumn)
.mapTo(Account.class)
.list();
Since StringTemplate by default uses the < character to mark ST expressions,
you might need to escape some SQL: String datePredSql = "<if(datePredicate)> <dateColumn> \\< :dateFilter <endif>"
|
Alternatively, SQL templates can be loaded from StringTemplate group files on the classpath:
group AccountDao;
selectAll(sort,sortBy) ::= <<
select id, name
from account
order by <if(sort)> <sortBy>, <endif> id
>>
ST template = StringTemplateSqlLocator.findStringTemplate(
"com/foo/AccountDao.sql.stg", "selectAll");
String sql = template.add("sort", true)
.add("sortBy", sortColumn)
.render();
In SQL Objects, the @UseStringTemplateEngine annotation sets the statement's template engine, similar to the first example above.
package com.foo;
public interface AccountDao {
@SqlQuery("select id, name " +
"from account " +
"order by <if(sort)> <sortBy>, <endif> id")
@UseStringTemplateEngine
List<Account> selectAll(@Define boolean sort,
@Define String sortBy);
}
Alternatively, the @UseStringTemplateSqlLocator
annotation sets the statement
locator, and loads SQL from a StringTemplate group file on the classpath:
package com.foo;
public interface AccountDao {
@SqlQuery
@UseStringTemplateSqlLocator
List<Account> selectAll(@Define boolean sort,
@Define String sortBy);
}
In this example, since the fully qualified class name is com.foo.AccountDao
,
SQL will be loaded from the file com/foo/AccountDao.sql.stg
on the
classpath.
By default, the template in the group with the same name as the method will be
used. This can be overridden on the @Sql___
annotation:
package com.foo;
public interface AccountDao {
@SqlQuery("listSorted")
@UseStringTemplateSqlLocator
List<Account> selectAll(@Define boolean sort,
@Define String sortBy);
}
In this example, the SQL template will still be loaded from the file
com/foo/AccountDao.sql.stg
on the classpath, however the listSorted
template will be used, regardless of the method name.
8.16. Vavr
The Vavr Plugin offers deep integration of Jdbi with the Vavr functional library:
-
Argument resolution for several Vavr Value types such as Option<T>, Either<L, T>, Lazy<T>, Try<T> and Validation<T>, provided that a mapper is registered for the wrapped type T.
-
Return Vavr collection types from queries. Supported are Seq<T>, Set<T>, Map<K, T> and Multimap<K, T>, as well as all subtypes thereof. It is possible to collect into a Traversable<T>; in this case a List<T> will be returned. For all interface types a sensible default implementation will be used (e.g. List<T> for Seq<T>). Furthermore, Multimap<K, T>s are backed by a Seq<T> as the default value container.
-
Columns can be mapped into Vavr's Option<T> type.
-
Tuple projections for Jdbi! Yey! Vavr offers tuples up to a maximum arity of 8. You can map your query results e.g. to Tuple3<Integer, String, Long>. If you select more columns than the arity of the projection, the columns up to that index will be used.
To use the plugin, add a Maven dependency:
<dependency>
<groupId>org.jdbi</groupId>
<artifactId>jdbi3-vavr</artifactId>
</dependency>
Currently Vavr >= 0.9.0 is supported and tested. The plugin pulls in a supported version of Vavr and is ready to be used. As with other plugins: install via a Jdbi instance or use auto install.
jdbi.installPlugin(new VavrPlugin());
Here are some usage examples of the features listed above:
String query = "select * from users where :name is null or name = :name";
Option<String> param = Option.of("eric");
// will fetch first user with given name or first user with any name (Option.none)
return handle.createQuery(query)
.bind("name", param)
.mapToBean(User.class)
.findFirst();
where param may be one of Option<T>, Either<L, T>, Lazy<T>, Try<T> or Validation<T>. Note that in the case of these types, the nested value must be 'present', otherwise a null value is used (e.g. for Either.Left or Validation.Invalid).
handle.createQuery("select name from users")
.collectInto(new GenericType<Seq<String>>() {});
This works for all the collection types supported. For the nested value row and column mappers already installed in Jdbi will be used. Therefore the following would work and can make sense if the column is nullable:
handle.createQuery("select middle_name from users") // nulls incoming!
.collectInto(new GenericType<Seq<Option<String>>>() {});
The plugin will obey configured key and value columns for Map<K, T> and Multimap<K, T> return types. In the next example we will key users by their name, which is not necessarily unique.
Multimap<String, User> usersByName = handle.createQuery("select * from users")
.setMapKeyColumn("name")
.collectInto(new GenericType<Multimap<String, User>>() {});
Last but not least we can now project simple queries to Vavr tuples like that:
// given a 'tuples' table with t1 int, t2 varchar, t3 varchar, ...
List<Tuple3<Integer, String, String>> tupleProjection = handle
.createQuery("select t1, t2, t3 from tuples")
.mapTo(new GenericType<Tuple3<Integer, String, String>>() {})
.list();
You can also project complex types into a tuple as long as a row mapper is registered.
// given that there are row mappers registered for both complex types
Tuple2<City, Address> tupleProjection = handle
.createQuery("select cityname, zipcode, street, housenumber from " +
"addresses where user_id = 1")
.mapTo(new GenericType<Tuple2<City, Address>>() {})
.one();
If you want to mix complex types and simple ones, we also have you covered. Using the TupleMappers class you can configure your projections. (In fact, you have to - read below!)
handle.configure(TupleMappers.class, c ->
c.setColumn(2, "street").setColumn(3, "housenumber"));
Tuple3<City, String, Integer> result = handle
.createQuery("select cityname, zipcode, street, housenumber from " +
"addresses where user_id = 1")
.mapTo(new GenericType<Tuple3<City, String, Integer>>() {})
.one();
Bear in mind:
-
The configuration of the columns is 1-based, since they reflect the tuples' values (which you would query by e.g. ._1).
-
Tuples are always mapped fully column-wise or fully via row mappers. If you want to mix row-mapped types and single-column mappings, the TupleMappers must be configured properly, i.e. all non-row-mapped tuple indices must be provided with a column configuration!
9. Cookbook
This section includes examples of various things you might like to do with Jdbi
.
9.1. Simple Dependency Injection
Jdbi
tries to be independent of using a dependency injection framework,
but it’s straightforward to integrate yours. Just do field injection on a simple custom config type:
class InjectedDependencies implements JdbiConfig<InjectedDependencies> {
@Inject
SomeDependency dep;
public InjectedDependencies() {}
@Override
public InjectedDependencies createCopy() {
return this; // effectively immutable
}
}
Jdbi jdbi = Jdbi.create(myDataSource);
myIoC.inject(jdbi.getConfig(InjectedDependencies.class));
// Then, in any component that needs to access it:
getHandle().getConfig(InjectedDependencies.class).dep
9.2. LIKE clauses with Parameters
Since JDBC (and therefore Jdbi
) does not allow binding parameters into the middle of string literals,
you cannot interpolate bindings into LIKE
clauses (LIKE '%:param%'
).
Incorrect usage:
handle.createQuery("select name from things where name like '%:search%'")
.bind("search", "foo")
.mapTo(String.class)
.list()
This query would try to select where name like '%:search%'
literally, without binding any arguments.
This is because JDBC drivers will not bind arguments inside string literals.
It never gets that far, though — this query will throw an exception, because we don’t allow unused argument bindings by default.
The solution is to use SQL string concatenation:
handle.createQuery("select name from things where name like '%' || :search || '%'")
.bind("search", "foo")
.mapTo(String.class)
.list()
Now, search
can be properly bound as a parameter to the statement, and it all works as desired.
Check the string concatenation syntax of your database before doing this. |
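An alternative that avoids database-specific concatenation syntax is to build the pattern in Java and bind it whole (note that % and _ inside the user input still act as wildcards unless escaped):

```java
String search = "foo";
List<String> names = handle
    .createQuery("select name from things where name like :search")
    .bind("search", "%" + search + "%")  // pattern built in Java, bound as one parameter
    .mapTo(String.class)
    .list();
```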
10. Advanced Topics
10.1. High Availability
Jdbi can be combined with connection pools and high-availability features in your database driver. We’ve used HikariCP in combination with the PgJDBC connection load balancing features with good success.
PGSimpleDataSource ds = new PGSimpleDataSource();
ds.setServerName("host1,host2,host3");
ds.setLoadBalanceHosts(true);
HikariConfig hc = new HikariConfig();
hc.setDataSource(ds);
hc.setMaximumPoolSize(6);
Jdbi jdbi = Jdbi.create(new HikariDataSource(hc)).installPlugin(new PostgresPlugin());
Each Jdbi may be backed by a pool of any number of hosts, but the connections should all be alike. Exactly which parameters must stay the same and which may vary depends on your database and driver.
If you want to have two separate pools, for example a read-only set that connects to read replicas
and a smaller pool of writers that go only to a single host, you currently should have separate
Jdbi
instances each pointed at a separate DataSource
.
10.2. Compiling with Parameter Names
By default, the Java compiler does not write parameter names of constructors and methods to class files. At runtime, reflectively asking for parameter names gives values like "arg0", "arg1", etc.
Out of the box, Jdbi uses annotations to know what each parameter is called, e.g.:
-
ConstructorMapper
uses the@ConstructorProperties
annotation. -
SQL Object method arguments use the
@Bind
annotation.
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(@Bind("id") long id, @Bind("name") String name); (1)
1 | Such verbose, very boilerplate. Wow. |
If you compile your code with the -parameters
compiler flag, then the need for
these annotations is removed—Jdbi automatically uses the method parameter name:
@SqlUpdate("insert into users (id, name) values (:id, :name)")
void insert(long id, String name);
10.2.1. Maven setup
Configure the maven-compiler-plugin
in your POM:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<compilerArgs>
<arg>-parameters</arg>
</compilerArgs>
</configuration>
</plugin>
10.2.2. IntelliJ IDEA setup
-
File → Settings
-
Build, Execution, Deployment → Compiler → Java Compiler
-
Additional command-line parameters:
-parameters
-
Click Apply, then OK.
-
Build → Rebuild Project
10.2.3. Eclipse setup
-
Window → Preferences
-
Java → Compiler
-
Under "Classfile Generation," check the option "Store information about method parameters (usable via reflection)."
10.3. Working with Generic Types
Jdbi provides utility classes to make it easier to work with Java generic types.
10.3.1. GenericType
GenericType represents a generic type signature that can be passed around in a type-safe way.
Create a generic type reference by instantiating an anonymous inner class:
new GenericType<Optional<String>>() {}
This type reference can be passed to any Jdbi method that accepts a
GenericType<T>
, e.g.:
List<Optional<String>> middleNames = handle
.select("select middle_name from contacts")
.mapTo(new GenericType<Optional<String>>() {})
.list();
The GenericType.getType() method returns the raw java.lang.reflect.Type object used to represent generics in Java.
10.3.2. GenericTypes
GenericTypes provides methods for working with Java generic types signatures.
All methods in GenericTypes
operate in terms of java.lang.reflect.Type
.
The getErasedType(Type)
method accepts a Type
and returns the raw Class
for that type, with any generic parameters erased:
Type listOfInteger = new GenericType<List<Integer>>() {}.getType();
GenericTypes.getErasedType(listOfInteger); // => List.class
GenericTypes.getErasedType(String.class); // => String.class
The resolveType(Type, Type)
method takes a generic type, and a context type in
which to resolve it.
For example, given the type variable T
from Optional<T>
:
Type t = Optional.class.getTypeParameters()[0];
And given the context type Optional<String>
:
Type optionalOfString = new GenericType<Optional<String>>() {}.getType();
The resolveType()
method answers the question: "what is type T, in the context
of type Optional<String>?"
GenericTypes.resolveType(t, optionalOfString);
// => String.class
This scenario of resolving the first type parameter of some generic supertype is so common that we made a separate method for it:
GenericTypes.findGenericParameter(optionalOfString, Optional.class);
// => Optional.of(String.class)
Type listOfInteger = new GenericType<List<Integer>>() {}.getType();
GenericTypes.findGenericParameter(listOfInteger, Collection.class);
// => Optional.of(Integer.class)
Note that this method will return Optional.empty()
if the type parameter
cannot be resolved, or the types have nothing to do with each other:
GenericTypes.findGenericParameter(optionalOfString, List.class);
// => Optional.empty();
10.4. NamedArgumentFinder
The NamedArgumentFinder
interface, as its name suggests, finds arguments by name from some source.
Typically a single NamedArgumentFinder
instance will provide arguments for
several different names.
In cases where neither bindBean()
, bindFields()
, bindMethods()
, nor
bindMap()
are a good fit, you can implement your own NamedArgumentFinder
and
bind that, instead of extracting and binding each argument individually.
Cache<String, Object> cache = ... // e.g. Guava Cache
NamedArgumentFinder cacheFinder = (name, ctx) ->
Optional.ofNullable(cache.getIfPresent(name))
.map(value -> ctx.findArgumentFor(Object.class, value));
stmt.bindNamedArgumentFinder(cacheFinder);
Under the hood, the SqlStatement.bindBean(), SqlStatement.bindMethods(), SqlStatement.bindFields(), and SqlStatement.bindMap() methods are just creating and binding custom implementations of NamedArgumentFinder for beans, methods, fields, and maps, respectively.
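The lookup contract can be sketched without Jdbi's types. The Finder interface below is a hypothetical stand-in for NamedArgumentFinder (it is not Jdbi's actual interface): given a name, it either produces a value or reports that it has none.

```java
import java.util.Map;
import java.util.Optional;

public class NamedArgumentFinderDemo {
    // Illustrative stand-in for NamedArgumentFinder: name -> Optional value.
    interface Finder {
        Optional<Object> find(String name);
    }

    public static void main(String[] args) {
        // One finder instance serves several different names from one source.
        Map<String, Object> source = Map.of("id", 42, "name", "Alice");
        Finder mapFinder = name -> Optional.ofNullable(source.get(name));

        System.out.println(mapFinder.find("name").orElse("<unbound>"));  // bound value
        System.out.println(mapFinder.find("email").orElse("<unbound>")); // no such name
    }
}
```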
10.5. JdbiConfig
Configuration is managed by the ConfigRegistry class. Each Jdbi object that represents a distinct database context (for example, Jdbi itself, a Handle instance, or an attached SqlObject class) gets its own configuration registry. Most contexts implement the Configurable interface which allows modification of its configuration as well as retrieving the current context’s configuration for use by Jdbi core or extensions.
When a new configurable context is created, it inherits a copy of its parent configuration at the time of creation - further modifications to the original will not affect already created configuration contexts. Configuration context copies happen when producing a Handle from Jdbi, when opening a SqlStatement from the Handle, and when attaching or creating an on-demand extension such as SqlObject.
The configuration itself is stored in various implementations of the JdbiConfig interface. Each implementation must adhere to the contract of the interface; in particular it must have a public no-argument constructor that provides useful defaults and a createCopy method that is invoked when a configuration registry is cloned.
Generally, configuration should be set on a context before that context is used, and not changed later. Some configuration classes may be thread safe but most are not.
Many of Jdbi’s core features, for example argument or mapper registries, are simply implementations of JdbiConfig that store the registered mappings for later use during query execution.
public class ExampleConfig implements JdbiConfig<ExampleConfig> {
private String color;
private int number;
public ExampleConfig() {
color = "purple";
number = 42;
}
private ExampleConfig(ExampleConfig other) {
this.color = other.color;
this.number = other.number;
}
public ExampleConfig setColor(String color) {
this.color = color;
return this;
}
public String getColor() {
return color;
}
public ExampleConfig setNumber(int number) {
this.number = number;
return this;
}
public int getNumber() {
return number;
}
@Override
public ExampleConfig createCopy() {
return new ExampleConfig(this);
}
}
10.5.1. Creating a custom JdbiConfig type
- Create a public class that implements JdbiConfig.
- Add a public, no-argument constructor.
- Add a private copy constructor.
- Implement createCopy() to call the copy constructor.
- Add config properties, and provide sane defaults for each property.
- Ensure that all config properties get copied to the new instance in the copy constructor.
- Override setConfig(ConfigRegistry) if your config class wants to be able to use other config classes in the registry. E.g. the RowMappers registry delegates to the ColumnMappers registry if it doesn’t have a mapper registered for a given type.
- Use that configuration object from other classes that are interested in it, e.g. BeanMapper, FieldMapper, and ConstructorMapper all use the ReflectionMappers config class to keep common configuration.
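The copy-on-inherit semantics described earlier can be demonstrated with a plain-Java sketch. The ExampleConfig here is a cut-down stand-in (no JdbiConfig interface or ConfigRegistry involved): a child context receives a snapshot made by createCopy(), so later changes to the parent do not leak into it.

```java
public class ConfigCopyDemo {
    // Stand-in for a JdbiConfig implementation: the copy constructor snapshots state.
    static class ExampleConfig {
        String color = "purple";

        ExampleConfig() {}

        ExampleConfig(ExampleConfig other) {
            this.color = other.color;
        }

        ExampleConfig createCopy() {
            return new ExampleConfig(this);
        }
    }

    public static void main(String[] args) {
        ExampleConfig parent = new ExampleConfig();
        ExampleConfig child = parent.createCopy(); // e.g. producing a Handle from Jdbi
        parent.color = "red";                      // later change to the parent context...
        System.out.println(child.color);           // ...does not affect the child
    }
}
```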
10.6. JdbiPlugin
JdbiPlugin can be used to bundle bulk configuration.
Plugins may be installed explicitly via Jdbi.installPlugin(JdbiPlugin), or may be installed automagically from the classpath using the ServiceLoader mechanism via installPlugins(). Jars may provide a file in META-INF/services/org.jdbi.v3.core.spi.JdbiPlugin containing the fully qualified class name of your plugin.
In general, Jdbi’s separate artifacts each provide a single relevant plugin (e.g. jdbi3-sqlite), and such modules will be auto-loadable. Modules that provide no plugins (e.g. jdbi3-commons-text) or multiple plugins (e.g. jdbi3-core) typically will not be.
The developers encourage you to install plugins explicitly. Code with declared dependencies on the modules it uses is more robust to refactoring, and provides useful data for static analysis tools about what code is or isn’t used.
10.7. StatementContext
The StatementContext class is a carrier for various state related to the creation and execution of statements that is not appropriate to hold on the Query or other particular statement class itself. Among other things, it holds open JDBC resources, processed SQL statements, and accumulated bindings. It is exposed to implementations of most user extension points, for example RowMapper, ColumnMapper, or CollectorFactory.
The StatementContext itself is not intended to be extended and generally extensions should not need to mutate the context. Please read the JavaDoc for more information on advanced usage.
10.8. User-Defined Annotations
SQL Object is designed to be extended with user-defined annotations. In fact, most of the annotations provided in Jdbi are wired up with the approach outlined below.
There are a few different categories of annotations in SQL Object, and it’s important to understand the differences between them:
- Statement Customizing Annotations - configure the underlying SqlStatement of a method prior to execution. These can only be used in tandem with annotations like @SqlQuery, @SqlUpdate, etc, and do not work on default methods.
- Configuration Annotations - modify configuration in the ConfigRegistry within the scope of a SQL object or one of its methods.
- Method Decorating Annotations - decorate a method invocation with some additional behavior, e.g. the @Transaction annotation wraps the method call in a handle.inTransaction() call.
Once you know which type of annotation you want, proceed to the appropriate section below and follow the guide to set it up.
10.8.1. Statement Customizing Annotations
SQL statement customizing annotations are used to apply some change to the SqlStatement associated with a SQL method.
Typically these annotations correlate to an API method in core, e.g. @Bind corresponds to SqlStatement.bind(), @MaxRows corresponds to Query.setMaxRows(), etc.
Customizing annotations are applied only after the SqlStatement has been created.
You can create your own SQL statement customizing annotations and attach runtime behavior to them.
First, create an annotation that you want to attach a statement customization to:
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.PARAMETER}) (2)
public @interface MaxRows {
int value();
}
1 | All statement customizing annotations should have a RUNTIME retention policy. |
2 | Statement customizing annotations only work on types, methods, or parameters. Strictly speaking, the @Target annotation is not required, but it’s a good practice to include it, so that annotations can only be applied where they will actually do something. |
Placing a customizing annotation on a type means "apply this customization to every method."
When used on parameters, annotations may use the argument passed to the method while processing the annotation.
Next, we write an implementation of the SqlStatementCustomizerFactory class, to process the annotation and apply the customization to the statement.
The SqlStatementCustomizerFactory produces two different types of "statement customizer" command objects: SqlStatementCustomizer (for annotations on types or methods), and SqlStatementParameterCustomizer (for annotations on method parameters).
Let’s implement a statement customizer factory for our annotation:
public class MaxRowsFactory implements SqlStatementCustomizerFactory {
@Override
public SqlStatementCustomizer createForType(Annotation annotation,
Class<?> sqlObjectType) {
final int maxRows = ((MaxRows)annotation).value(); (1)
return stmt -> ((Query)stmt).setMaxRows(maxRows); (2)
}
@Override
public SqlStatementCustomizer createForMethod(Annotation annotation,
Class<?> sqlObjectType,
Method method) {
return createForType(annotation, sqlObjectType); (3)
}
@Override
public SqlStatementParameterCustomizer createForParameter(Annotation annotation,
Class<?> sqlObjectType,
Method method,
Parameter param,
int index,
Type type) {
return (stmt, maxRows) -> ((Query)stmt).setMaxRows((Integer) maxRows); (4)
}
}
1 | Extract the max rows from the annotation |
2 | SqlStatementCustomizer can be implemented as a lambda—it receives a SqlStatement as a parameter, calls whatever method it wants on the statement, and returns void. |
3 | Since the customization for this annotation is the same at the method level as at the type level, we simply delegate to the type-level method for brevity. |
4 | SqlStatementParameterCustomizer can also be implemented as a lambda. It accepts a SqlStatement and the value that was passed into the method on the annotated parameter. |
Finally, add the @SqlStatementCustomizingAnnotation annotation to the @MaxRows annotation type. This tells Jdbi that MaxRowsFactory implements the behavior of the @MaxRows annotation:
@SqlStatementCustomizingAnnotation(MaxRowsFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.PARAMETER})
public @interface MaxRows {
int value() default -1;
}
Your statement customizing annotation is now ready to use on any SQL object:
public interface Dao {
@SqlQuery("select * from contacts")
@MaxRows(100)
List<Contact> list();
@SqlQuery("select * from contacts")
List<Contact> list(@MaxRows int maxRows);
}
We chose @MaxRows as an example here because it was easy to understand. In practice, you will get better database performance by using a LIMIT clause in your SQL statement than by using @MaxRows.
10.8.2. Configuration Annotations
Configuration annotations are used to apply some change to the ConfigRegistry associated with a SQL object or method.
Typically these annotations correlate to a method of Configurable (Jdbi, Handle, and SqlStatement all implement this interface). For example, @RegisterColumnMapper correlates to Configurable.registerColumnMapper().
You can create your own configuration annotations, and attach runtime behavior to them:
- Write a new configuration annotation, with any attributes you need.
- Write an implementation of Configurer which performs the configuration associated with your annotation.
- Add the @ConfiguringAnnotation annotation to your configuration annotation type.
With the above steps completed, Jdbi will invoke your configurer whenever it encounters the associated annotation.
Let’s re-implement one of Jdbi’s built-in annotations as an example:
The @RegisterColumnMapper annotation has an attribute to specify the class of the column mapper to register. Wherever the annotation is used, we want Jdbi to create an instance of that mapper type, and register it with the config registry.
First, let’s create the new annotation type:
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ElementType.TYPE, ElementType.METHOD}) (2)
public @interface RegisterColumnMapper {
Class<? extends ColumnMapper<?>> value();
}
1 | All configuration annotations should have a RUNTIME retention policy. |
2 | Configuration annotations only work on types and methods. Strictly speaking, the @Target annotation is not required, but it’s a good practice to include it, so that annotations can only be applied where they will actually do something. |
Placing a configuration annotation on a type means "apply this configuration to every method."
Next, we write an implementation of the Configurer class, to process the annotation and apply the configuration:
public class RegisterColumnMapperImpl implements Configurer {
@Override
public void configureForMethod(ConfigRegistry registry,
Annotation annotation,
Class<?> sqlObjectType,
Method method) {
configure(registry, (RegisterColumnMapper) annotation);
}
@Override
public void configureForType(ConfigRegistry registry,
Annotation annotation,
Class<?> sqlObjectType) {
configure(registry, (RegisterColumnMapper) annotation);
}
private void configure(ConfigRegistry registry,
RegisterColumnMapper registerColumnMapper) { (1)
Class<? extends ColumnMapper<?>> mapperType = registerColumnMapper.value();
try {
ColumnMapper<?> mapper = mapperType.getConstructor().newInstance();
registry.get(ColumnMappers.class).register(mapper);
}
catch (ReflectiveOperationException e) {
throw new RuntimeException("Cannot construct " + mapperType, e);
}
}
}
1 | In this example, we’re applying the same configuration whether the @RegisterColumnMapper annotation is used on the SQL object type or method. However, this is not a requirement; some annotations may choose to apply configuration differently depending on whether the annotation is placed on the type or the method. |
For configuration annotations with only one target (e.g. @KeyColumn and @ValueColumn may only be applied to methods), you need only implement the Configurer method appropriate for the annotation target.
Finally, add the @ConfiguringAnnotation annotation to your @RegisterColumnMapper annotation type. This tells Jdbi that RegisterColumnMapperImpl implements the behavior of the @RegisterColumnMapper annotation.
@ConfiguringAnnotation(RegisterColumnMapperImpl.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface RegisterColumnMapper {
Class<? extends ColumnMapper<?>> value();
}
Your configuration annotation is now ready to use in any SQL object:
public interface AccountDao {
@SqlQuery("select balance from accounts where id = ?")
@RegisterColumnMapper(MoneyMapper.class)
public Money getBalance(long accountId);
}
10.8.3. Method Decorating Annotations
Method decorating annotations are used to enhance a SQL Object method with additional (or substitute) behavior.
Internally, SQL Object represents the behavior of each method with an instance of the Handler interface. Every time you call a method on a SQL Object instance, the method is executed by executing the handler for the given method.
When you use a decorating annotation (like @Transaction), the regular handler for a method is wrapped in another handler which may perform some action before and/or after passing the call to the original handler.
A decorator could even perform some action instead of calling the original, e.g. for a caching annotation.
Let’s re-implement the @Transaction annotation to see how it works:
First, create the annotation type:
@Retention(RetentionPolicy.RUNTIME) (1)
@Target(ElementType.METHOD) (2)
public @interface Transaction {
TransactionIsolationLevel value();
}
1 | All decorating annotations should have a RUNTIME retention policy. |
2 | Decorating annotations only work on types and methods. Strictly speaking, the @Target annotation is not required, but it’s a good practice to include it, so that annotations can only be applied where they will actually do something. |
Placing a decorating annotation on a type means "apply this decoration to every method."
Next we write an implementation of the HandlerDecorator interface, to process the annotation and apply the decoration:
public class TransactionDecorator implements HandlerDecorator {
public Handler decorateHandler(Handler base,
Class<?> sqlObjectType,
Method method) {
Transaction anno = method.getAnnotation(Transaction.class); (1)
TransactionIsolationLevel isolation = anno.value(); (2)
return (target, args, handleSupplier) -> handleSupplier.getHandle() (3)
.inTransaction(isolation, h -> base.invoke(target, args, handleSupplier));
}
}
1 | Get the @Transaction annotation |
2 | Extract the transaction isolation level from the annotation |
3 | The Handler interface accepts a target (the SQL Object instance being invoked), an Object[] array of arguments passed to the method, and a HandleSupplier. |
Finally, add the @SqlMethodDecoratingAnnotation annotation to your @Transaction annotation type. This tells Jdbi that TransactionDecorator implements the behavior of the @Transaction annotation.
@SqlMethodDecoratingAnnotation(TransactionDecorator.class)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Transaction {
TransactionIsolationLevel value();
}
Your decorating annotation is now ready to use in any SQL object:
public interface ContactDao {
@SqlBatch("insert into contacts (id, name) values (:id, :name)")
@Transaction
void batchInsert(@BindBean Contact... contacts);
}
Decorator Order
If a SQL object method has two or more decorating annotations applied, and the order of decorations is important, use the @DecoratorOrder annotation. If no order is declared, type decorators apply first, then method decorators, but the order is not further specified.
For example, suppose a method were annotated both with @Cached and @Transaction (just go with it..). We would probably want the @Cached annotation to go first, so that transactions are not created unnecessarily when the cache already contains the entry.
public interface ContactDao {
@SqlQuery("select * from contacts where id = ?")
@Cached
@Transaction
@DecoratorOrder(Cached.class, Transaction.class)
Contact getById(long id);
}
Decorator order is expressed from outermost to innermost.
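The outermost-to-innermost wrapping can be illustrated with a tiny stand-in for the Handler interface (the types and strings here are illustrative, not Jdbi's actual classes). Each decorator wraps the handler built so far, so @DecoratorOrder(Cached.class, Transaction.class) puts the cache check outside the transaction wrapper:

```java
public class DecoratorOrderDemo {
    // Illustrative stand-in for Jdbi's Handler interface.
    interface Handler {
        String invoke();
    }

    public static void main(String[] args) {
        Handler base = () -> "query";                                 // the SQL method itself
        Handler transaction = () -> "tx(" + base.invoke() + ")";      // inner decorator
        Handler cached = () -> "cache(" + transaction.invoke() + ")"; // outer decorator

        // Cached is outermost: the cache is consulted before any transaction starts.
        System.out.println(cached.invoke()); // prints "cache(tx(query))"
    }
}
```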
10.9. TemplateEngine
Jdbi uses a TemplateEngine implementation to render templates into SQL. Template engines take a SQL template string and the StatementContext as input, and produce a parseable SQL string as output.
Out of the box, Jdbi is configured to use DefinedAttributeTemplateEngine, which replaces angle-bracketed tokens like <name> in your SQL statements with the string value of the named attribute:
String tableName = "customers";
handle.createQuery("select <columns> from <table>")
.define("table", tableName)
.defineList("columns", "id", "name")
.mapToMap()
.list(); // => "select id, name from customers"
The defineList method defines a list of elements as the comma-separated splice of String values of the individual elements. In the above example, the columns attribute is defined as "id, name".
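A minimal sketch of what an attribute-substituting template engine does, using plain string replacement (this is not Jdbi's actual DefinedAttributeTemplateEngine implementation, just an illustration of the substitution step):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateDemo {
    // Replace each <name> token with the string value of the named attribute.
    static String render(String template, Map<String, String> attributes) {
        String sql = template;
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            sql = sql.replace("<" + e.getKey() + ">", e.getValue());
        }
        return sql;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("columns", "id, name"); // what defineList("columns", "id", "name") yields
        attrs.put("table", "customers");
        System.out.println(render("select <columns> from <table>", attrs));
    }
}
```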
Any custom template engine can be used. Simply implement the TemplateEngine interface, then call setTemplateEngine() on the Jdbi, Handle, or on a SQL statement like Update or Query:
TemplateEngine templateEngine = (template, ctx) -> {
// render the template into SQL however you like;
// trivially, return it unchanged:
return template;
};
jdbi.setTemplateEngine(templateEngine);
Jdbi also provides StringTemplateEngine, which renders templates using the StringTemplate library. See StringTemplate 4.
10.10. SqlParser
After the SQL template has been rendered, Jdbi uses a SqlParser to parse out any named parameters from the SQL statement. This produces a ParsedSql object, which contains all the information Jdbi needs to bind parameters and execute your SQL statement.
Out of the box, Jdbi is configured to use ColonPrefixSqlParser, which recognizes colon-prefixed named parameters, e.g. :name.
handle.createUpdate("insert into characters (id, name) values (:id, :name)")
.bind("id", 1)
.bind("name", "Dolores Abernathy")
.execute();
Jdbi also provides HashPrefixSqlParser, which recognizes hash-prefixed parameters, e.g. #hashtag. Use this parser by calling setSqlParser() on the Jdbi, Handle, or any SQL statement such as Query or Update.
handle.setSqlParser(new HashPrefixSqlParser());
handle.createUpdate("insert into characters (id, name) values (#id, #name)")
.bind("id", 2)
.bind("name", "Teddy Flood")
.execute();
The default parsers recognize any Java identifier as a parameter or attribute name. Even some strange cases like emoji are allowed, although the Jdbi authors encourage appropriate discretion 🧐.
The default parsers try to ignore parameter-like constructions inside of string literals, since JDBC drivers wouldn’t let you bind parameters there anyway.
For you fearless adventurers who have read the Dragon book, any custom SQL parser can be used. Simply implement the SqlParser interface, then set it on the Jdbi, Handle, or SQL statement:
SqlParser parser = (sql, ctx) -> {
...
};
jdbi.setSqlParser(parser);
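What a parser does to the SQL can be sketched with a regular expression: collect the colon-prefixed names and rewrite each occurrence to a JDBC ? placeholder. This is a deliberate simplification that ignores string literals, quoting, and casts like ::int, which the real ColonPrefixSqlParser handles:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ColonParseDemo {
    public static void main(String[] args) {
        String sql = "insert into characters (id, name) values (:id, :name)";
        Pattern named = Pattern.compile(":([A-Za-z_][A-Za-z0-9_]*)");

        List<String> names = new ArrayList<>();
        StringBuilder jdbcSql = new StringBuilder();
        Matcher m = named.matcher(sql);
        while (m.find()) {
            names.add(m.group(1));             // remember the parameter name
            m.appendReplacement(jdbcSql, "?"); // rewrite to a positional placeholder
        }
        m.appendTail(jdbcSql);

        System.out.println(jdbcSql); // the SQL a JDBC driver can prepare
        System.out.println(names);   // the names Jdbi would bind by
    }
}
```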
10.11. SqlLogger
The SqlLogger interface is called before and after executing each statement, and is given the current StatementContext, to log any relevant information desired: mainly the query in various compilation stages, attributes and bindings, and important timestamps.
There’s a simple Slf4JSqlLogger implementation that logs all executed statements for debugging.
10.12. ResultProducer
A ResultProducer takes a lazily supplied PreparedStatement and produces a result. The most common producer path, execute(), retrieves the ResultSet over the query results and then uses a ResultSetScanner or higher level mapper to produce results.
An example alternate use is to just return the number of rows modified, as in an UPDATE or INSERT statement:
public static ResultProducer<Integer> returningUpdateCount() {
return (statementSupplier, ctx) -> {
try {
return statementSupplier.get().getUpdateCount();
} finally {
ctx.close();
}
};
}
If you acquire the lazy statement, you are responsible for ensuring that the context is closed eventually to release database resources.
Most users will not need to implement the ResultProducer interface.
10.13. Generator
Jdbi includes an experimental SqlObject code generator. If you include the jdbi3-generator artifact as an annotation processor and annotate your SqlObject definitions with @GenerateSqlObject, the generator will produce an implementing class and avoid using Proxy instances. This may be useful for graal-native compilation.
11. Appendix
11.1. Best Practices
- Test your SQL Objects (DAOs) against real databases when possible. Jdbi tries to be defensive and fail eagerly when you hold it wrong.
- Use the -parameters compiler flag to avoid all those @Bind("foo") String foo redundant qualifiers in SQL Object method parameters. See Compiling with Parameter Names.
- Use a profiler! The true root cause of performance problems can often be a surprise. Measure first, then tune for performance. And then measure again to be sure it made a difference.
- Don’t forget to bring a towel!
11.3. Related Projects
Embedded Postgres makes testing against a real database quick and easy.
dropwizard-jdbi3 provides integration with DropWizard.
metrics-jdbi3 instruments using DropWizard-Metrics to emit statement timing statistics.
Do you know of a project related to Jdbi? Send us an issue and we’ll add a link here!
11.4. Contributing
jdbi uses GitHub for collaboration. Please check out the project page for more information.
If you have a question, we have a Google Group mailing list.
Users sometimes hang out on IRC in #jdbi on Freenode.
11.5. Upgrading from v2 to v3
Already using Jdbi v2?
Here’s a quick summary of differences to help you upgrade:
General:
- Maven artifacts renamed and split out:
- Old: org.jdbi:jdbi
- New: org.jdbi:jdbi3-core, org.jdbi:jdbi3-sqlobject, etc.
- Root package renamed: org.skife.jdbi.v2 → org.jdbi.v3
Core API:
- DBI, IDBI → Jdbi
- Instantiate with Jdbi.create() factory methods instead of constructors.
- DBIException → JdbiException
- Handle.select(String, …) now returns a Query for further method chaining, instead of a List<Map<String, Object>>. Call Handle.select(sql, …).mapToMap().list() for the same effect as v2.
- Handle.insert() and Handle.update() have been coalesced into Handle.execute().
- ArgumentFactory is no longer generic.
- AbstractArgumentFactory is a generic implementation of ArgumentFactory for factories that handle a single argument type.
- Argument and mapper factories now operate in terms of java.lang.reflect.Type instead of java.lang.Class. This allows Jdbi to handle arguments and mappers for generic types.
- Argument and mapper factories now have a single build() method that returns an Optional, instead of separate accepts() and build() methods.
- ResultSetMapper → RowMapper. The row index parameter was also removed from RowMapper; the current row number can be retrieved directly from the ResultSet.
- ResultColumnMapper → ColumnMapper
- ResultSetMapperFactory → RowMapperFactory
- ResultColumnMapperFactory → ColumnMapperFactory
- Query no longer maps to Map<String, Object> by default. Call Query.mapToMap(), .mapToBean(type), .map(mapper), or .mapTo(type).
- ResultBearing<T> was refactored into ResultBearing (no generic parameter) and ResultIterable<T>. Call .mapTo(type) to get a ResultIterable<T>.
- TransactionConsumer and TransactionCallback only take a Handle now; the TransactionStatus argument is removed. Just roll back the handle now.
- TransactionStatus class removed.
- CallbackFailedException class removed. The functional interfaces like HandleConsumer, HandleCallback, TransactionCallback, etc. can now throw any exception type. Methods like Jdbi.inTransaction that take these callbacks use exception transparency to throw only the exception thrown by the callback. If your callback throws no checked exceptions, you don’t need a try/catch block.
- StatementLocator interface removed from core. All core statements expect to receive the actual SQL string now. A similar concept, SqlLocator, was added, but it is specific to SQL Object.
- StatementRewriter refactored into TemplateEngine and SqlParser.
- StringTemplate is no longer required to process <name>-style tokens in SQL.
- Custom SqlParser implementations must now provide a way to transform raw parameter names to names that will be properly parsed out as named params.
SQL Object API:
- SQL Object support is not installed by default. It must be added as a separate dependency, and the plugin installed into the Jdbi object:
Jdbi jdbi = Jdbi.create(...);
jdbi.installPlugin(new SqlObjectPlugin());
- SQL Object types in v3 must be public interfaces, not classes. Method return types must likewise be public. This is due to the SQL Object implementation switching from CGLIB to java.lang.reflect.Proxy, which only supports interfaces.
- GetHandle → SqlObject
- SqlLocator replaces StatementLocator, and only applies to SQL Objects.
- @RegisterMapper divided into @RegisterRowMapper and @RegisterColumnMapper.
- @Bind annotations on SQL Object method parameters can be made optional by compiling your code with the -parameters compiler flag enabled.
- @BindIn → @BindList, and no longer requires StringTemplate.
- On-demand SQL objects don’t play well with methods that return Iterable or FluentIterable. On-demand objects strictly close the handle after each method call, and no longer "hold the door open" for you to finish consuming the iterable as they did in v2. This forecloses a major source of connection leaks.
- SQL Objects are no longer closeable; they are either on-demand, or their lifecycle is tied to the lifecycle of the Handle they are attached to.
- @BindAnnotation meta-annotation removed. Use @SqlStatementCustomizingAnnotation instead.
- @SingleValueResult → @SingleValue. The annotation may be used for method return types, or on @SqlBatch parameters.