Examples of usage

File: get_and_put.py.


Open connection

from pyignite import Client

client = Client()
with client.connect('127.0.0.1', 10800):
    # all the following operations run inside this block

Create cache

my_cache = client.create_cache('my cache')

Put value in cache

my_cache.put('my key', 42)

Get value from cache

result = my_cache.get('my key')
print(result)  # 42

result = my_cache.get('non-existent key')
print(result)  # None

Get multiple values from cache

result = my_cache.get_all([
    'my key',
    'non-existent key',
])
print(result)  # {'my key': 42}

Type hints usage

File: type_hints.py

from pyignite.datatypes import CharObject, ShortObject

my_cache.put('my key', 42)
# value 42 takes 9 bytes of memory as a LongObject

my_cache.put('my key', 42, value_hint=ShortObject)
# value 42 takes only 3 bytes as a ShortObject
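The byte counts quoted in the comments can be sanity-checked with the standard struct module. This is a minimal sketch assuming the wire layout of a one-byte type code followed by the payload (type codes 2 and 4 are the Ignite binary protocol codes for short and long); it illustrates the sizes only and is not pyignite's actual serializer:

```python
import struct

# one type-code byte + payload: a long is 1 + 8 bytes, a short is 1 + 2 bytes
long_encoded = struct.pack('<bq', 4, 42)   # type code 4 (long), 8-byte payload
short_encoded = struct.pack('<bh', 2, 42)  # type code 2 (short), 2-byte payload

print(len(long_encoded))   # 9
print(len(short_encoded))  # 3
```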

my_cache.put('a', 1)
# 'a' is a key of type String

my_cache.put('a', 2, key_hint=CharObject)
# another key 'a' of type CharObject was created

value = my_cache.get('a')
# 1

value = my_cache.get('a', key_hint=CharObject)
# 2

# now let us delete both keys at once
my_cache.remove_keys([
    'a',                # a default type key
    ('a', CharObject),  # a key of type CharObject
])

As a rule of thumb:

  • when a pyignite method or function deals with a single value or key, it has an additional parameter, like value_hint or key_hint, which accepts a parser/constructor class,

  • nearly any structure element (inside dict or list) can be replaced with a two-tuple of (said element, type hint).

Refer to the Data Types section for the full list of parser/constructor classes you can use as type hints.


File: expiry_policy.py.

You can enable expiry policy (TTL) in two ways.

Firstly, expiry policy can be set for the entire cache by setting PROP_EXPIRY_POLICY in the cache settings dictionary on creation.

from datetime import timedelta
from time import sleep

from pyignite.datatypes.expiry_policy import ExpiryPolicy
from pyignite.datatypes.prop_codes import PROP_NAME, PROP_EXPIRY_POLICY

ttl_cache = client.create_cache({
    PROP_NAME: 'test',
    PROP_EXPIRY_POLICY: ExpiryPolicy(create=timedelta(seconds=1.0))
})

ttl_cache.put(1, 1)
print(f"key = {1}, value = {ttl_cache.get(1)}")
# key = 1, value = 1

sleep(2)  # let the entry expire
print(f"key = {1}, value = {ttl_cache.get(1)}")
# key = 1, value = None

Secondly, expiry policy can be set for all cache operations performed through a cache decorator, which is created with with_expire_policy():

ttl_cache = simple_cache.with_expire_policy(access=timedelta(seconds=1.0))
ttl_cache.put(1, 1)
print(f"key = {1}, value = {ttl_cache.get(1)}")
# key = 1, value = 1

sleep(2)  # let the entry expire
print(f"key = {1}, value = {ttl_cache.get(1)}")
# key = 1, value = None
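The create-TTL semantics can be sketched in pure Python with a toy cache that timestamps its entries. This is an illustration of the behaviour only; in Ignite, expiration is enforced server-side, not by pyignite:

```python
import time

class TTLDict:
    """Toy TTL cache: an entry expires `ttl` seconds after it is created."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.data = {}

    def put(self, key, value):
        # remember the value together with its expiration deadline
        self.data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        value, deadline = item
        if time.monotonic() >= deadline:
            del self.data[key]  # expired: drop and report a miss
            return None
        return value

cache = TTLDict(0.05)
cache.put(1, 1)
print(cache.get(1))  # 1
time.sleep(0.06)
print(cache.get(1))  # None
```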


File: scans.py.

Cache’s scan() method allows you to get the whole contents of the cache, element by element.

Let us put some data in cache.

my_cache = client.create_cache('my cache')
my_cache.put_all({'key_{}'.format(v): v for v in range(20)})
# {
#     'key_0': 0,
#     'key_1': 1,
#     'key_2': 2,
#     ... 20 elements in total...
#     'key_18': 18,
#     'key_19': 19
# }

scan() returns a cursor that yields two-tuples of key and value. You can iterate through the generated pairs in a safe manner:

with my_cache.scan() as cursor:
    for k, v in cursor:
        print(k, v)
# 'key_17' 17
# 'key_10' 10
# 'key_6' 6
# ... 20 elements in total...
# 'key_16' 16
# 'key_12' 12

Or, alternatively, you can convert the cursor to a dictionary in one go:

with my_cache.scan() as cursor:
    print(dict(cursor))
# {
#     'key_17': 17,
#     'key_10': 10,
#     'key_6': 6,
#     ... 20 elements in total...
#     'key_16': 16,
#     'key_12': 12
# }

But be cautious: if the cache contains a large set of data, the dictionary may consume too much memory!
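If you only need a bounded sample of a large cache, you can consume part of the cursor with itertools.islice instead of materializing everything. A sketch with a stand-in generator in place of a live scan cursor:

```python
from itertools import islice

def scan_sim():
    # stand-in for a scan cursor: lazily yields (key, value) two-tuples
    for v in range(20):
        yield ('key_{}'.format(v), v)

# take a bounded slice instead of building a dictionary of all 20 pairs
first_five = dict(islice(scan_sim(), 5))
print(first_five)
# {'key_0': 0, 'key_1': 1, 'key_2': 2, 'key_3': 3, 'key_4': 4}
```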

Object collections

File: get_and_put_complex.py.

Ignite collection types are represented in pyignite as two-tuples. First comes the collection type ID or a deserialization hint, which is specific to each collection type. Second comes the data value.

from pyignite.datatypes import CollectionObject, MapObject, ObjectArrayObject


For Python prior to 3.6, it might be important to distinguish between ordered (collections.OrderedDict) and unordered (dict) dictionary types, so you could use LINKED_HASH_MAP for the former and HASH_MAP for the latter.

Since CPython 3.6 all dictionaries became de facto ordered. You can always use LINKED_HASH_MAP as a safe default.
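The distinction that LINKED_HASH_MAP versus HASH_MAP used to capture can be seen in plain Python: two dicts compare equal regardless of key order, while two OrderedDicts also compare their order:

```python
from collections import OrderedDict

# plain dicts: content-only equality, key order is ignored
print({'a': 1, 'b': 2} == {'b': 2, 'a': 1})            # True

# OrderedDicts: equality is order-sensitive
print(OrderedDict(a=1, b=2) == OrderedDict(b=2, a=1))  # False
```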

my_cache = client.get_or_create_cache('my cache')

value = {1: 'test', 'key': 2.0}

# saving ordered dictionary
type_id = MapObject.LINKED_HASH_MAP
my_cache.put('my dict', (type_id, value))
result = my_cache.get('my dict')
print(result)  # (2, {1: 'test', 'key': 2.0})

# saving unordered dictionary
type_id = MapObject.HASH_MAP
my_cache.put('my dict', (type_id, value))
result = my_cache.get('my dict')
print(result)  # (1, {1: 'test', 'key': 2.0})


See CollectionObject and the Ignite documentation on the Collection type for a description of the various Java collection types. Note that not all of them have a direct Python counterpart. For example, Python does not have ordered sets (it is recommended to use OrderedDict’s keys and disregard its values).

As for pyignite, the rules are simple: pass any iterable as data, and you will always get a list back.

type_id = CollectionObject.LINKED_LIST
value = [1, '2', 3.0]

my_cache.put('my list', (type_id, value))

result = my_cache.get('my list')
print(result)  # (2, [1, '2', 3.0])

type_id = CollectionObject.HASH_SET
value = [4, 4, 'test', 5.6]

my_cache.put('my set', (type_id, value))

result = my_cache.get('my set')
print(result)  # (3, [5.6, 4, 'test'])
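The HASH_SET result above (3 elements, arbitrary order) mirrors ordinary Python set semantics:

```python
# a HASH_SET behaves like a Python set: duplicates collapse, order is lost
value = [4, 4, 'test', 5.6]
unique = set(value)
print(len(unique))                  # 3
print(unique == {4, 'test', 5.6})  # True
```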

Object array

ObjectArrayObject has very limited functionality in pyignite, since no type checks can be enforced on its contents. But it can still be used for interoperability with Java.

type_id = ObjectArrayObject.OBJECT
value = [7, '8', 9.0]

my_cache.put(
    'my array of objects',
    (type_id, value),
    value_hint=ObjectArrayObject  # this hint is mandatory!
)
result = my_cache.get('my array of objects')
print(result)  # (-1, [7, '8', 9.0])


File: transactions.py.

Client transactions are supported for caches with TRANSACTIONAL mode.

Let’s create a transactional cache:

from pyignite.datatypes.cache_config import CacheAtomicityMode
from pyignite.datatypes.prop_codes import PROP_NAME, PROP_CACHE_ATOMICITY_MODE

cache = client.get_or_create_cache({
    PROP_NAME: 'tx_cache',
    PROP_CACHE_ATOMICITY_MODE: CacheAtomicityMode.TRANSACTIONAL
})

Let’s start a transaction and commit it:

key = 1
with client.tx_start() as tx:
    cache.put(key, 'success')
    tx.commit()

Let’s check that the transaction was committed successfully:

# key=1 value=success
print(f"key={key} value={cache.get(key)}")

Let’s check that raising an exception inside the with block leads to the transaction’s rollback:

try:
    with client.tx_start() as tx:
        cache.put(key, 'fail')
        raise RuntimeError('test')
except RuntimeError:
    pass

# key=1 value=success
print(f"key={key} value={cache.get(key)}")

Let’s check that a timed-out transaction is successfully rolled back:

try:
    with client.tx_start(timeout=1000, label='long-tx') as tx:
        cache.put(key, 'fail')
        sleep(2.0)  # outlive the 1000 ms transaction timeout
        tx.commit()
except CacheError as e:
    print(e)
    # Cache transaction timed out: GridNearTxLocal[...timeout=1000, ... label=long-tx]

# key=1 value=success
print(f"key={key} value={cache.get(key)}")

See more info about transaction parameters in the documentation of tx_start().
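The commit-on-commit(), rollback-on-exception semantics shown above can be sketched in pure Python with a toy context manager. This is an illustration of the idea only, not how pyignite implements transactions:

```python
class TxSketch:
    """Toy transaction: writes are buffered and applied only if commit()
    was called and the block exited cleanly; otherwise they are discarded."""
    def __init__(self, store):
        self.store = store
        self.pending = {}
        self.committed = False

    def put(self, key, value):
        self.pending[key] = value  # buffer the write

    def commit(self):
        self.committed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None and self.committed:
            self.store.update(self.pending)  # apply buffered writes
        # otherwise the buffered writes are dropped — a rollback
        return False  # never swallow exceptions

store = {}
with TxSketch(store) as tx:
    tx.put(1, 'success')
    tx.commit()
print(store)  # {1: 'success'}

try:
    with TxSketch(store) as tx:
        tx.put(1, 'fail')
        raise RuntimeError('test')
except RuntimeError:
    pass
print(store)  # {1: 'success'}
```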


File: sql.py.

These examples are similar to the ones given in the Apache Ignite SQL Documentation: Getting Started.


First let us establish a connection.

client = Client()
with client.connect('127.0.0.1', 10800):

Then create the tables. Begin with the Country table, then proceed with the related tables City and CountryLanguage.

COUNTRY_CREATE_TABLE = '''CREATE TABLE Country (
    Code CHAR(3) PRIMARY KEY,
    Name CHAR(52),
    Continent CHAR(50),
    Region CHAR(26),
    SurfaceArea DECIMAL(10,2),
    IndepYear SMALLINT(6),
    Population INT(11),
    LifeExpectancy DECIMAL(3,1),
    GNP DECIMAL(10,2),
    GNPOld DECIMAL(10,2),
    LocalName CHAR(45),
    GovernmentForm CHAR(45),
    HeadOfState CHAR(60),
    Capital INT(11),
    Code2 CHAR(2)
)'''

CITY_CREATE_TABLE = '''CREATE TABLE City (
    ID INT(11),
    Name CHAR(35),
    CountryCode CHAR(3),
    District CHAR(20),
    Population INT(11),
    PRIMARY KEY (ID, CountryCode)
) WITH "affinityKey=CountryCode"'''

LANGUAGE_CREATE_TABLE = '''CREATE TABLE CountryLanguage (
    CountryCode CHAR(3),
    Language CHAR(30),
    IsOfficial BOOLEAN,
    Percentage DECIMAL(4,1),
    PRIMARY KEY (CountryCode, Language)
) WITH "affinityKey=CountryCode"'''

for query in [
    COUNTRY_CREATE_TABLE,
    CITY_CREATE_TABLE,
    LANGUAGE_CREATE_TABLE,
]:
    client.sql(query)

Create indexes.

CITY_CREATE_INDEX = 'CREATE INDEX idx_country_code ON city (CountryCode)'
LANGUAGE_CREATE_INDEX = 'CREATE INDEX idx_lang_country_code ON CountryLanguage (CountryCode)'

for query in [CITY_CREATE_INDEX, LANGUAGE_CREATE_INDEX]:
    client.sql(query)

Fill tables with data.

COUNTRY_INSERT = '''INSERT INTO Country(
    Code, Name, Continent, Region,
    SurfaceArea, IndepYear, Population,
    LifeExpectancy, GNP, GNPOld,
    LocalName, GovernmentForm, HeadOfState,
    Capital, Code2
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''

CITY_INSERT = '''INSERT INTO City(
    ID, Name, CountryCode, District, Population
) VALUES (?, ?, ?, ?, ?)'''

LANGUAGE_INSERT = '''INSERT INTO CountryLanguage(
    CountryCode, Language, IsOfficial, Percentage
) VALUES (?, ?, ?, ?)'''

for row in TestData.COUNTRY:
    client.sql(COUNTRY_INSERT, query_args=row)

for row in TestData.CITY:
    client.sql(CITY_INSERT, query_args=row)

for row in TestData.LANGUAGE:
    client.sql(LANGUAGE_INSERT, query_args=row)

Data samples (TestData) are taken from the PyIgnite GitHub repository.
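The '?' placeholders bind query_args positionally, the same convention the standard sqlite3 module uses; a self-contained illustration of parameterized inserts and selects:

```python
import sqlite3

# positional '?' placeholders, analogous to client.sql(query, query_args=row)
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE City (ID INT, Name TEXT, Population INT)')
conn.execute('INSERT INTO City VALUES (?, ?, ?)', (3802, 'Detroit', 951270))

row = conn.execute(
    'SELECT Name, Population FROM City WHERE ID = ?', (3802,)
).fetchone()
print(row)  # ('Detroit', 951270)
```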

That concludes the preparation of data. Now let us answer some questions.

What are the 10 largest cities in our data sample (population-wise)?

with client.sql('SELECT name, population FROM City ORDER BY population DESC LIMIT 10') as cursor:
    print('Most 10 populated cities:')
    for row in cursor:
        print(row)
# Most 10 populated cities:
# ['Mumbai (Bombay)', 10500000]
# ['Shanghai', 9696300]
# ['New York', 8008278]
# ['Peking', 7472000]
# ['Delhi', 7206704]
# ['Chongqing', 6351600]
# ['Tianjin', 5286800]
# ['Calcutta [Kolkata]', 4399819]
# ['Wuhan', 4344600]
# ['Harbin', 4289800]

The sql() method returns a generator that yields the resulting rows.

What are the 10 most populated cities throughout the 3 chosen countries?

If you set the include_field_names argument to True, the sql() method will yield a list of column names first. You can access the field names with the Python built-in next() function.

MOST_POPULATED_IN_3_COUNTRIES = '''
SELECT country.name as country_name, city.name as city_name, MAX(city.population) AS max_pop FROM country
    JOIN city ON city.countrycode = country.code
    WHERE country.code IN ('USA','IND','CHN')
    GROUP BY country.name, city.name ORDER BY max_pop DESC LIMIT 10
'''

with client.sql(MOST_POPULATED_IN_3_COUNTRIES, include_field_names=True) as cursor:
    print('Most 10 populated cities in USA, India and China:')
    table_str_pattern = '{:15}\t| {:20}\t| {}'
    print(table_str_pattern.format(*next(cursor)))  # header row with field names
    print('*' * 50)
    for row in cursor:
        print(table_str_pattern.format(*row))
# Most 10 populated cities in USA, India and China:
# COUNTRY_NAME   	| CITY_NAME           	| MAX_POP
# **************************************************
# India          	| Mumbai (Bombay)     	| 10500000
# China          	| Shanghai            	| 9696300
# United States  	| New York            	| 8008278
# China          	| Peking              	| 7472000
# India          	| Delhi               	| 7206704
# China          	| Chongqing           	| 6351600
# China          	| Tianjin             	| 5286800
# India          	| Calcutta [Kolkata]  	| 4399819
# China          	| Wuhan               	| 4344600
# China          	| Harbin              	| 4289800

Display all the information about a given city

with client.sql('SELECT * FROM City WHERE id = ?', query_args=[3802], include_field_names=True) as cursor:
    field_names = next(cursor)
    field_data = list(*cursor)  # unpack the single resulting row

    print('City info:')
    for field_name, field_value in zip(field_names, field_data):
        print(f'{field_name}: {field_value}')
# City info:
# ID: 3802
# NAME: Detroit
# COUNTRYCODE: USA
# DISTRICT: Michigan
# POPULATION: 951270

Finally, delete the tables used in this example with the following queries:

for table_name in TableNames:
    result = client.sql(Query.DROP_TABLE.format(table_name.value))

Complex objects

File: binary_basics.py.

A Complex object (often called a ‘Binary object’) is an Ignite data type designed to represent a Java class. It has the following features:

  • it has a unique ID (type ID), which is derived from a class name (type name);

  • it has one or more associated schemas that describe its inner structure (the order, names and types of its fields), each schema having its own ID;

  • it has an optional version number, aimed at helping end users distinguish between objects of the same type serialized with different schemas.

Unfortunately, these distinctive features of the Complex object have little to no meaning outside of the Java language. A Python class cannot be defined by its name (it is not unique), by its ID (object IDs in Python are volatile; in CPython an object ID is just a pointer into the interpreter’s memory heap), or by the set of its fields (they have no associated data types and can be added or deleted at run time). For the pyignite user this means that, for all purposes of storing native Python data, it is better to use the Ignite CollectionObject or MapObject data types.

However, for interoperability purposes, pyignite has a mechanism for creating special Python classes to read or write Complex objects. These classes have an interface that simulates all the features of the Complex object: type name, type ID, schema, schema ID, and version number.

Since representing each Complex object with one concrete class would severely limit the user’s data manipulation capabilities, all the functionality described above is implemented through a metaclass: GenericObjectMeta. This metaclass is used automatically when reading Complex objects.

person = person_cache.get(1)
print(person.__class__.__name__)
# Person

print(person.__class__ is Person)
# True if `Person` was registered automatically (on writing)
# or manually (using `client.register_binary_type()` method).
# False otherwise

print(person)
# Person(first_name='Ivan', last_name='Ivanov', age=33, version=1)

Here you can see how GenericObjectMeta uses the attrs package internally for creating nice __init__() and __repr__() methods.
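The idea of deriving __init__() and __repr__() from a schema dictionary can be illustrated with a toy metaclass. This is a rough sketch of the concept only; GenericObjectMeta itself is built on attrs and does much more (type IDs, schemas, serialization):

```python
class SchemaMeta(type):
    """Toy metaclass: builds __init__ and __repr__ from a schema dict."""
    def __new__(mcs, name, bases, namespace, schema=None):
        fields = list(schema or {})

        def __init__(self, **kwargs):
            for field in fields:
                setattr(self, field, kwargs.get(field))

        def __repr__(self):
            pairs = ', '.join(
                '{}={!r}'.format(f, getattr(self, f)) for f in fields)
            return '{}({})'.format(name, pairs)

        namespace.update(__init__=__init__, __repr__=__repr__)
        return super().__new__(mcs, name, bases, namespace)

    def __init__(cls, name, bases, namespace, schema=None):
        # swallow the extra `schema` keyword that type.__init__ rejects
        super().__init__(name, bases, namespace)

class Person(metaclass=SchemaMeta, schema={'first_name': None, 'age': None}):
    pass

print(Person(first_name='Ivan', age=33))
# Person(first_name='Ivan', age=33)
```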

In this case the autogenerated dataclass’s name, Person, exactly matches the type name of the Complex object it represents (the content of the type_name property). But when the Complex object’s class name contains characters that cannot be used in a Python identifier, for example:

  • ., when fully qualified Java class names are used,

  • $, a common case for Scala classes,

  • +, internal class name separator in C#,

then pyignite cannot maintain this match. In such cases pyignite tries to sanitize the type name to derive a “good” dataclass name from it.

If your code needs consistent naming between the server and the client, make sure that your Ignite cluster is configured to use simple class names.

Anyway, you can reuse the autogenerated dataclass for subsequent writes:

Person = person.__class__
person_cache.put(
    1, Person(first_name='Ivan', last_name='Ivanov', age=33)
)

GenericObjectMeta can also be used directly for creating custom classes:

class Person(metaclass=GenericObjectMeta, schema={
    'first_name': String,
    'last_name': String,
    'age': IntObject,
}):
    pass

Note how the Person class is defined. schema is a GenericObjectMeta metaclass parameter. Another important GenericObjectMeta parameter is type_name, but it is optional and defaults to the class name (‘Person’ in our example).

Note also that Person does not have to define its own attributes, methods and properties (hence the pass statement), although it is completely possible.

Now, when your custom Person class is created, you are ready to send data to the Ignite server using its objects. The client will implicitly register your class as soon as the first Complex object is sent. But if you intend to use your custom class for reading existing Complex objects’ values first, you must register the class explicitly with your client:

client.register_binary_type(Person)
Now that we have dealt with the basics of pyignite’s implementation of Complex objects, let us move on to more elaborate examples.


File: read_binary.py.

Ignite SQL uses Complex objects internally to represent keys and rows in SQL tables. Normally, SQL data is accessed via queries (see SQL), so the following example serves solely to demonstrate how Binary objects (not Ignite SQL) work.

In the previous examples we have created some SQL tables. Let us do it again and examine the Ignite storage afterwards.

result = client.get_cache_names()
print(result)

We can see that Ignite created a cache for each of our tables. The caches are conveniently named using the ‘SQL_<schema name>_<table name>’ pattern.

Now let us examine the configuration of a cache that contains SQL data, using the settings property.

city_cache = client.get_or_create_cache('SQL_PUBLIC_CITY')
print(city_cache.settings[PROP_QUERY_ENTITIES])

# [{'field_name_aliases': [{'alias': 'DISTRICT', 'field_name': 'DISTRICT'},
#                              {'alias': 'POPULATION', 'field_name': 'POPULATION'},
#                              {'alias': 'COUNTRYCODE', 'field_name': 'COUNTRYCODE'},
#                              {'alias': 'ID', 'field_name': 'ID'},
#                              {'alias': 'NAME', 'field_name': 'NAME'}],
#       'key_field_name': None,
#       'key_type_name': 'SQL_PUBLIC_CITY_081f37cc8ac72b10f08ab1273b744497_KEY',
#       'query_fields': [{'default_value': None,
#                         'is_key_field': True,
#                         'is_notnull_constraint_field': False,
#                         'name': 'ID',
#                         'precision': -1,
#                         'scale': -1,
#                         'type_name': 'java.lang.Integer'},
#                        {'default_value': None,
#                         'is_key_field': False,
#                         'is_notnull_constraint_field': False,
#                         'name': 'NAME',
#                         'precision': 35,
#                         'scale': -1,
#                         'type_name': 'java.lang.String'},
#                        {'default_value': None,
#                         'is_key_field': True,
#                         'is_notnull_constraint_field': False,
#                         'name': 'COUNTRYCODE',
#                         'precision': 3,
#                         'scale': -1,
#                         'type_name': 'java.lang.String'},
#                        {'default_value': None,
#                         'is_key_field': False,
#                         'is_notnull_constraint_field': False,
#                         'name': 'DISTRICT',
#                         'precision': 20,
#                         'scale': -1,
#                         'type_name': 'java.lang.String'},
#                        {'default_value': None,
#                         'is_key_field': False,
#                         'is_notnull_constraint_field': False,
#                         'name': 'POPULATION',
#                         'precision': -1,
#                         'scale': -1,
#                         'type_name': 'java.lang.Integer'}],
#       'query_indexes': [],
#       'table_name': 'CITY',
#       'value_field_name': None,
#       'value_type_name': 'SQL_PUBLIC_CITY_081f37cc8ac72b10f08ab1273b744497'}]

The values of value_type_name and key_type_name are names of the binary types. The City table’s key fields are stored using the key_type_name type, and the rest of the fields using the value_type_name type.

Now when we have the cache, in which the SQL data resides, and the names of the key and value data types, we can read the data without using SQL functions and verify the correctness of the result.

with city_cache.scan() as cursor:
    for line in next(cursor):
        print(line)
# {'COUNTRYCODE': 'USA',
#  'ID': 3793,
#  'type_name': 'SQL_PUBLIC_CITY_081f37cc8ac72b10f08ab1273b744497_KEY'}
# {'DISTRICT': 'New York',
#  'NAME': 'New York',
#  'POPULATION': 8008278,
#  'type_name': 'SQL_PUBLIC_CITY_081f37cc8ac72b10f08ab1273b744497'}

What we see is a tuple of key and value extracted from the cache. Both the key and the value are Complex objects. Their dataclass names match the value_type_name and key_type_name cache settings, and their fields correspond to the columns of the SQL table.


File: create_binary.py.

Now that we are aware of the internal structure of Ignite SQL storage, we can create a table and put data in it using only key-value functions.

For example, let us create a table to register High School students: a rough equivalent of the following SQL DDL statement:

CREATE TABLE Student (
    sid CHAR(9) PRIMARY KEY,
    name VARCHAR(20),
    login CHAR(8),
    age INTEGER(11),
    gpa REAL
)

These are the necessary steps to perform the task.

  1. Create table cache.

student_cache = client.create_cache({
    PROP_NAME: 'SQL_PUBLIC_STUDENT',
    PROP_SQL_SCHEMA: 'PUBLIC',
    PROP_QUERY_ENTITIES: [
        {
            'table_name': 'Student'.upper(),
            'key_field_name': 'SID',
            'key_type_name': 'java.lang.Integer',
            'field_name_aliases': [],
            'query_fields': [
                {
                    'name': 'SID',
                    'type_name': 'java.lang.Integer',
                    'is_key_field': True,
                    'is_notnull_constraint_field': True,
                },
                {
                    'name': 'NAME',
                    'type_name': 'java.lang.String',
                },
                {
                    'name': 'LOGIN',
                    'type_name': 'java.lang.String',
                },
                {
                    'name': 'AGE',
                    'type_name': 'java.lang.Integer',
                },
                {
                    'name': 'GPA',
                    'type_name': 'java.math.Double',
                },
            ],
            'query_indexes': [],
            'value_type_name': 'SQL_PUBLIC_STUDENT_TYPE',
            'value_field_name': None,
        },
    ],
})
  2. Define Complex object data class.

class Student(
    metaclass=GenericObjectMeta,
    type_name='SQL_PUBLIC_STUDENT_TYPE',
    schema={'NAME': String, 'LOGIN': String, 'AGE': IntObject, 'GPA': DoubleObject}
):
    pass
  3. Insert row.

student_cache.put(
    1,
    Student(LOGIN='jdoe', NAME='John Doe', AGE=17, GPA=4.25),
    key_hint=IntObject
)

Now let us make sure that our cache really can be used with SQL functions.

with client.sql(r'SELECT * FROM Student', include_field_names=True) as cursor:
    print(next(cursor))
    # ['SID', 'NAME', 'LOGIN', 'AGE', 'GPA']

    print(*cursor)
    # [1, 'John Doe', 'jdoe', 17, 4.25]

Note, however, that the cache we created cannot be dropped with a DDL command. It should be deleted like any other key-value cache.

# client.sql('DROP TABLE Student')
# pyignite.exceptions.SQLError: class org.apache.ignite.IgniteCheckedException:
# Only cache created with CREATE TABLE may be removed with DROP TABLE

student_cache.destroy()



File: migrate_binary.py.

Suppose we have an accounting app that stores its data in key-value format. Our task would be to introduce the following changes to the original expense voucher’s format and data:

  • rename date to expense_date,

  • add report_date,

  • set report_date to the current date if reported is True, None if False,

  • delete reported.

First get the vouchers’ cache.

accounting = client.get_or_create_cache('accounting')

If you do not store the schema of the Complex object in code, you can obtain the registered data classes with the query_binary_type() method.

data_classes = client.query_binary_type('ExpenseVoucher')
print(data_classes)
# {547629991: <class 'pyignite.binary.ExpenseVoucher'>, -231598180: <class '__main__.ExpenseVoucher'>}

Let us modify the schema and create a new Complex object class with an updated schema.

s_id, data_class = data_classes.popitem()
schema = data_class.schema

schema['expense_date'] = schema['date']
del schema['date']
schema['report_date'] = DateObject
del schema['reported']
schema['sum'] = DecimalObject

# define new data class
class ExpenseVoucherV2(
    metaclass=GenericObjectMeta,
    type_name='ExpenseVoucher',
    schema=schema,
):
    pass

Now migrate the data from the old schema to the new one.

def migrate(cache, data, new_class):
    """ Migrate given data pages. """
    for key, old_value in data:
        # read data
        print('Old value:')
        print(old_value)
        # Old value:
        # {'cashier_id': 10,
        #  'date': datetime.datetime(2017, 12, 1, 0, 0),
        #  'purpose': 'Aenean eget bibendum lorem, a luctus libero',
        #  'recipient': 'Joe Bloggs',
        #  'reported': True,
        #  'sum': Decimal('135.79'),
        #  'type_name': 'ExpenseVoucher'}

        # create new binary object
        new_value = new_class()

        # process data
        new_value.sum = old_value.sum
        new_value.purpose = old_value.purpose
        new_value.recipient = old_value.recipient
        new_value.cashier_id = old_value.cashier_id
        new_value.expense_date = old_value.date
        new_value.report_date = date.today() if old_value.reported else None

        # replace data
        cache.put(key, new_value)

        # verify data
        verify = cache.get(key)
        print('New value:')
        print(verify)
        # New value:
        # {'cashier_id': 10,
        #  'expense_date': datetime.datetime(2017, 12, 1, 0, 0),
        #  'purpose': 'Aenean eget bibendum lorem, a luctus libero',
        #  'recipient': 'Joe Bloggs',
        #  'report_date': datetime.datetime(2022, 5, 6, 0, 0),
        #  'sum': Decimal('135.79'),
        #  'type_name': 'ExpenseVoucher'}

        print('-' * 20)

# migrate data
with client.connect('127.0.0.1', 10800):
    accounting = client.get_or_create_cache('accounting')

    with accounting.scan() as cursor:
        migrate(accounting, cursor, ExpenseVoucherV2)

At this moment, all the fields defined in either of our schemas may be available in the resulting binary object, depending on which schema was used when writing it with put() or similar methods. The Ignite Binary API has no method for deleting a Complex object schema; all the schemas ever defined will stay in the cluster until its shutdown.

This versioning mechanism is quite simple and robust, but it has its limitations. The main one: you cannot change the type of an existing field. If you try, you will be greeted with the following message:

`org.apache.ignite.binary.BinaryObjectException: Wrong value has been set [typeName=SomeType, fieldName=f1, fieldType=String, assignedValueType=int]`

As an alternative, you can rename the field or create a new Complex object.
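The migration steps above can be sketched on plain dictionaries, independent of Ignite (migrate_value is a hypothetical helper for illustration):

```python
from datetime import date

def migrate_value(old):
    """Apply the voucher migration: rename date, add report_date, drop reported."""
    new = dict(old)
    new['expense_date'] = new.pop('date')  # rename date -> expense_date
    reported = new.pop('reported')         # delete reported...
    # ...and set report_date to today if reported was True, None otherwise
    new['report_date'] = date.today() if reported else None
    return new

old = {'date': '2017-12-01', 'reported': True, 'sum': 135.79}
print(sorted(migrate_value(old)))
# ['expense_date', 'report_date', 'sum']
```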


File: failover.py.

When a connection to the server is broken or times out, the Client object propagates the original exception (OSError or SocketError), but keeps its constructor’s parameters intact and tries to reconnect transparently.

When the Client detects that all the nodes in the list have failed with no possibility of restoring the connection, it raises a special ReconnectError exception.

Gather 3 Ignite nodes on localhost into one cluster and run:

from pyignite import Client
from pyignite.datatypes.cache_config import CacheMode
from pyignite.datatypes.prop_codes import PROP_NAME, PROP_CACHE_MODE, PROP_BACKUPS_NUMBER
from pyignite.exceptions import SocketError

nodes = [
    ('127.0.0.1', 10800),
    ('127.0.0.1', 10801),
    ('127.0.0.1', 10802),
]

client = Client(timeout=4.0)
with client.connect(nodes):

    my_cache = client.get_or_create_cache({
        PROP_NAME: 'my_cache',
        PROP_CACHE_MODE: CacheMode.REPLICATED,
        PROP_BACKUPS_NUMBER: 2,
    })
    my_cache.put('test_key', 0)
    test_value = 0

    # abstract main loop
    while True:
        try:
            # do the work
            test_value = my_cache.get('test_key') or 0
            my_cache.put('test_key', test_value + 1)
        except (OSError, SocketError) as e:
            # recover from error (repeat last command, check data
            # consistency or just continue − depends on the task)
            print(f'Error: {e}')
            print(f'Last value: {test_value}')

Then try shutting down and restarting nodes, and see what happens.

# Connected
# Error: Connection broken.
# Last value: 2650
# Reconnecting
# Error: Connection broken.
# Last value: 10204
# Reconnecting
# Error: Connection broken.
# Last value: 18932
# Reconnecting
# Traceback (most recent call last):
#   ...
# pyignite.exceptions.ReconnectError: Can not reconnect: out of nodes.

Client reconnection does not require an explicit user action, like calling a special method or resetting a parameter. This means that, instead of checking the connection status, it is better for the pyignite user to simply attempt the intended data operations and catch the resulting exception.
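The "just try and catch" advice can be wrapped in a small retry helper. This is a hypothetical sketch, not part of pyignite; flaky_put stands in for a real cache operation:

```python
import time

def with_retry(operation, retries=3, exceptions=(OSError,), delay=0.01):
    """Run operation, retrying on connection-type errors."""
    for attempt in range(retries):
        try:
            return operation()
        except exceptions:
            if attempt == retries - 1:
                raise  # out of attempts: propagate the error
            time.sleep(delay)

calls = {'count': 0}

def flaky_put():
    # stand-in for my_cache.put(...): fails twice, then succeeds
    calls['count'] += 1
    if calls['count'] < 3:
        raise OSError('Connection broken.')
    return 'ok'

print(with_retry(flaky_put))  # ok
```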


SSL/TLS

There are some special requirements for testing SSL connectivity.

The Ignite server must be configured for securing the binary protocol port. The server configuration process can be split up into these basic steps:

  1. Create a key store and a trust store using Java keytool. When creating the trust store, you will probably need a client X.509 certificate. You will also need to export the server X.509 certificate to include in the client chain of trust.

  2. Turn on the SslContextFactory for your Ignite cluster according to this document: Securing Connection Between Nodes.

  3. Tell Ignite to encrypt data on its thin client port, using the settings for ClientConnectorConfiguration. If you only want to encrypt the connection, not to validate the client’s certificate, set the sslClientAuth property to false. You will still have to set up the trust store in step 1, though.

Client SSL settings are summarized here: Client.

To use SSL encryption without certificate validation, just set use_ssl to True.

from pyignite import Client

client = Client(use_ssl=True)
client.connect('127.0.0.1', 10800)

To identify the client, create an SSL keypair and a certificate with the openssl command and use them in this manner:

from pyignite import Client

client = Client(
    use_ssl=True,
    ssl_keyfile='/path/to/key/file',
    ssl_certfile='/path/to/cert/file',
)
client.connect('ignite-example.com', 10800)

To check the authenticity of the server, get the server certificate or certificate chain and provide its path in the ssl_ca_certfile parameter.

import ssl

from pyignite import Client

client = Client(
    use_ssl=True,
    ssl_ca_certfile='/path/to/ca/certfile',
    ssl_cert_reqs=ssl.CERT_REQUIRED,
)
client.connect('ignite-example.com', 10800)

You can also provide such parameters as the set of ciphers (ssl_ciphers) and the SSL version (ssl_version), if the defaults (ssl._DEFAULT_CIPHERS and TLS 1.1) do not suit you.

Password authentication

To authenticate, you must set the authenticationEnabled property to true and enable persistence in the Ignite XML configuration file, as described in the Authentication section of the Ignite documentation.

Be advised that sending credentials over an open channel is strongly discouraged, since they can easily be intercepted. Supplying credentials automatically turns SSL on from the client side. It is highly recommended to secure the connection to the Ignite server, as described in the SSL/TLS example, when using password authentication.

Then just supply the username and password parameters to the Client constructor.

from pyignite import Client

client = Client(username='ignite', password='ignite')
client.connect('ignite-example.com', 10800)

If you still do not wish to secure the connection in spite of the warning, then disable SSL explicitly when creating the client object:

client = Client(username='ignite', password='ignite', use_ssl=False)

Note that it is not possible for an Ignite thin client to obtain the cluster’s authentication settings through the binary protocol. Unexpected credentials are simply ignored by the server. In the opposite case, the user is greeted with the following message:

# pyignite.exceptions.HandshakeError: Handshake error: Unauthenticated sessions are prohibited.