Examples of usage

File: get_and_put.py.

Key-value

Open connection

from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)

Create cache

my_cache = client.create_cache('my cache')

Put value in cache

my_cache.put('my key', 42)

Get value from cache

result = my_cache.get('my key')
print(result)  # 42

result = my_cache.get('non-existent key')
print(result)  # None

Get multiple values from cache

result = my_cache.get_all([
    'my key',
    'non-existent key',
    'other-key',
])
print(result)  # {'my key': 42}

Type hints usage

File: type_hints.py.

from pyignite.datatypes import CharObject, ShortObject

my_cache.put('my key', 42)
# value ‘42’ takes 9 bytes of memory as a LongObject

my_cache.put('my key', 42, value_hint=ShortObject)
# value ‘42’ takes only 3 bytes as a ShortObject

my_cache.put('a', 1)
# ‘a’ is a key of type String

my_cache.put('a', 2, key_hint=CharObject)
# another key ‘a’ of type CharObject was created

value = my_cache.get('a')
print(value)
# 1

value = my_cache.get('a', key_hint=CharObject)
print(value)
# 2

# now let us delete both keys at once
my_cache.remove_keys([
    'a',                # a default type key
    ('a', CharObject),  # a key of type CharObject
])

As a rule of thumb:

  • when a pyignite method or function deals with a single value or key, it has an additional parameter, like value_hint or key_hint, which accepts a parser/constructor class;
  • nearly any structure element (inside a dict or list) can be replaced with a two-tuple of (said element, type hint), as the sketch below demonstrates.

Refer to the Data Types section for the full list of parser/constructor classes you can use as type hints.
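
For instance, the two rules can be combined in bulk operations. A minimal sketch (not from the example file), assuming the two-tuple convention applies to put_all() and get_all() as the rules above suggest:

my_cache.put_all({
    'my key': (42, ShortObject),  # value with an explicit type hint
    ('a', CharObject): 2,         # key with an explicit type hint
})

result = my_cache.get_all([
    'my key',
    ('a', CharObject),
])
print(result)  # e.g. {'my key': 42, 'a': 2}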

Scan

File: scans.py.

The cache’s scan() method allows you to retrieve the whole contents of the cache, element by element.

Let us put some data in the cache.

my_cache.put_all({'key_{}'.format(v): v for v in range(20)})
# {
#     'key_0': 0,
#     'key_1': 1,
#     'key_2': 2,
#     ... 20 elements in total...
#     'key_18': 18,
#     'key_19': 19
# }

result = my_cache.scan()

scan() returns a generator that yields two-tuples of key and value. You can iterate through the generated pairs in a safe manner:

for k, v in result:
    print(k, v)
# 'key_17' 17
# 'key_10' 10
# 'key_6' 6
# ... 20 elements in total...
# 'key_16' 16
# 'key_12' 12

Or, alternatively, you can convert the generator to a dictionary in one go:

print(dict(result))
# {
#     'key_17': 17,
#     'key_10': 10,
#     'key_6': 6,
#     ... 20 elements in total...
#     'key_16': 16,
#     'key_12': 12
# }

But be cautious: if the cache contains a large set of data, the dictionary may eat too much memory!
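
If memory is a concern, you can consume the generator lazily and stop early instead of materializing the whole cache at once. A simple sketch:

result = my_cache.scan()
for i, (k, v) in enumerate(result):
    if i >= 5:  # process only the first five pairs
        break
    print(k, v)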

Do cleanup

Destroy the created cache and close the connection.

my_cache.destroy()
client.close()

SQL

File: sql.py.

These examples are similar to the ones given in the Apache Ignite SQL Documentation: Getting Started.

Setup

First let us establish a connection.

client = Client()
client.connect('127.0.0.1', 10800)

Then create the tables. Begin with the Country table, then proceed with the related tables City and CountryLanguage.

COUNTRY_CREATE_TABLE_QUERY = '''CREATE TABLE Country (
    Code CHAR(3) PRIMARY KEY,
    Name CHAR(52),
    Continent CHAR(50),
    Region CHAR(26),
    SurfaceArea DECIMAL(10,2),
    IndepYear SMALLINT(6),
    Population INT(11),
    LifeExpectancy DECIMAL(3,1),
    GNP DECIMAL(10,2),
    GNPOld DECIMAL(10,2),
    LocalName CHAR(45),
    GovernmentForm CHAR(45),
    HeadOfState CHAR(60),
    Capital INT(11),
    Code2 CHAR(2)
)'''

CITY_CREATE_TABLE_QUERY = '''CREATE TABLE City (
    ID INT(11),
    Name CHAR(35),
    CountryCode CHAR(3),
    District CHAR(20),
    Population INT(11),
    PRIMARY KEY (ID, CountryCode)
) WITH "affinityKey=CountryCode"'''

LANGUAGE_CREATE_TABLE_QUERY = '''CREATE TABLE CountryLanguage (
    CountryCode CHAR(3),
    Language CHAR(30),
    IsOfficial BOOLEAN,
    Percentage DECIMAL(4,1),
    PRIMARY KEY (CountryCode, Language)
) WITH "affinityKey=CountryCode"'''

for query in [
    COUNTRY_CREATE_TABLE_QUERY,
    CITY_CREATE_TABLE_QUERY,
    LANGUAGE_CREATE_TABLE_QUERY,
]:
    client.sql(query)

Create indexes.

CITY_CREATE_INDEX = '''
CREATE INDEX idx_country_code ON city (CountryCode)'''

LANGUAGE_CREATE_INDEX = '''
CREATE INDEX idx_lang_country_code ON CountryLanguage (CountryCode)'''

for query in [CITY_CREATE_INDEX, LANGUAGE_CREATE_INDEX]:
    client.sql(query)

Fill tables with data.

COUNTRY_INSERT_QUERY = '''INSERT INTO Country(
    Code, Name, Continent, Region,
    SurfaceArea, IndepYear, Population,
    LifeExpectancy, GNP, GNPOld,
    LocalName, GovernmentForm, HeadOfState,
    Capital, Code2
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''

CITY_INSERT_QUERY = '''INSERT INTO City(
    ID, Name, CountryCode, District, Population
) VALUES (?, ?, ?, ?, ?)'''

LANGUAGE_INSERT_QUERY = '''INSERT INTO CountryLanguage(
    CountryCode, Language, IsOfficial, Percentage
) VALUES (?, ?, ?, ?)'''
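
The loops below assume that COUNTRY_DATA, CITY_DATA and LANGUAGE_DATA are lists of rows matching the column order of the corresponding INSERT queries. A minimal illustrative sample (a single row each, not the full dataset):

from decimal import Decimal

COUNTRY_DATA = [
    ['USA', 'United States', 'North America', 'North America',
     Decimal('9363520.00'), 1776, 278357000, Decimal('77.1'),
     Decimal('8510700.00'), Decimal('8110900.00'), 'United States',
     'Federal Republic', 'George W. Bush', 3813, 'US'],
]

CITY_DATA = [
    [3802, 'Detroit', 'USA', 'Michigan', 951270],
]

LANGUAGE_DATA = [
    ['USA', 'English', True, Decimal('86.2')],
]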

for row in COUNTRY_DATA:
    client.sql(COUNTRY_INSERT_QUERY, query_args=row)

for row in CITY_DATA:
    client.sql(CITY_INSERT_QUERY, query_args=row)

for row in LANGUAGE_DATA:
    client.sql(LANGUAGE_INSERT_QUERY, query_args=row)

The full data samples are taken from the Ignite GitHub repository.

That concludes the preparation of data. Now let us answer some questions.

What are the 10 largest cities in our data sample (population-wise)?

MOST_POPULATED_QUERY = '''
SELECT name, population FROM City ORDER BY population DESC LIMIT 10'''

result = client.sql(MOST_POPULATED_QUERY)
print('10 most populated cities:')
for row in result:
    print(row)
# 10 most populated cities:
# ['Mumbai (Bombay)', 10500000]
# ['Shanghai', 9696300]
# ['New York', 8008278]
# ['Peking', 7472000]
# ['Delhi', 7206704]
# ['Chongqing', 6351600]
# ['Tianjin', 5286800]
# ['Calcutta [Kolkata]', 4399819]
# ['Wuhan', 4344600]
# ['Harbin', 4289800]

The sql() method returns a generator that yields the resulting rows.

What are the 10 most populated cities throughout the 3 chosen countries?

If you set the include_field_names argument to True, the sql() method will yield the list of column names as its first result. You can access the field names with the Python built-in next() function.

MOST_POPULATED_IN_3_COUNTRIES_QUERY = '''
SELECT country.name as country_name, city.name as city_name, MAX(city.population) AS max_pop FROM country
    JOIN city ON city.countrycode = country.code
    WHERE country.code IN ('USA','IND','CHN')
    GROUP BY country.name, city.name ORDER BY max_pop DESC LIMIT 10
'''

result = client.sql(
    MOST_POPULATED_IN_3_COUNTRIES_QUERY,
    include_field_names=True,
)
print('10 most populated cities in USA, India and China:')
print(next(result))
print('----------------------------------------')
for row in result:
    print(row)
# 10 most populated cities in USA, India and China:
# ['COUNTRY_NAME', 'CITY_NAME', 'MAX_POP']
# ----------------------------------------
# ['India', 'Mumbai (Bombay)', 10500000]
# ['China', 'Shanghai', 9696300]
# ['United States', 'New York', 8008278]
# ['China', 'Peking', 7472000]
# ['India', 'Delhi', 7206704]
# ['China', 'Chongqing', 6351600]
# ['China', 'Tianjin', 5286800]
# ['India', 'Calcutta [Kolkata]', 4399819]
# ['China', 'Wuhan', 4344600]
# ['China', 'Harbin', 4289800]

Display all the information about a given city

CITY_INFO_QUERY = '''SELECT * FROM City WHERE id = ?'''

result = client.sql(
    CITY_INFO_QUERY,
    query_args=[3802],
    include_field_names=True,
)
field_names = next(result)
field_data = next(result)

print('City info:')
for field_name, field_value in zip(field_names, field_data):
    print('{}: {}'.format(field_name, field_value))
# City info:
# ID: 3802
# NAME: Detroit
# COUNTRYCODE: USA
# DISTRICT: Michigan
# POPULATION: 951270

Finally, delete the tables used in this example with the following queries:

DROP_TABLE_QUERY = '''DROP TABLE IF EXISTS {}'''

CITY_TABLE_NAME = 'City'
LANGUAGE_TABLE_NAME = 'CountryLanguage'
COUNTRY_TABLE_NAME = 'Country'

for table_name in [
    CITY_TABLE_NAME,
    LANGUAGE_TABLE_NAME,
    COUNTRY_TABLE_NAME,
]:
    result = client.sql(DROP_TABLE_QUERY.format(table_name))

Complex objects

File: binary_basics.py.

Complex object (often called ‘Binary object’) is an Ignite data type designed to represent a Java class. It has the following features:

  • it has a unique ID (type id), which is derived from a class name (type name),
  • it has one or more associated schemas that describe its inner structure (the order, names, and types of its fields); each schema has its own ID,
  • it has an optional version number, aimed at helping end users distinguish between objects of the same type serialized with different schemas.

Unfortunately, these distinctive features of the Complex object have little to no meaning outside of the Java language. A Python class cannot be defined by its name (it is not unique), by an ID (an object’s ID in Python is volatile; in CPython it is just a pointer into the interpreter’s memory heap), or by the set of its fields (they do not have associated data types, and they can be added or deleted at run time). For the pyignite user this means that, for all purposes of storing native Python data, it is better to use the Ignite CollectionObject or MapObject data types.
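
For illustration, here is a minimal sketch (not part of the example file) of storing a native Python dict as an Ignite MapObject; note that a MapObject value is passed as a (map type, dict) two-tuple:

from pyignite import Client
from pyignite.datatypes import MapObject

client = Client()
client.connect('127.0.0.1', 10800)
cache = client.get_or_create_cache('native data')

# store a native Python dict as a HASH_MAP-flavored MapObject
cache.put(
    'native dict',
    (MapObject.HASH_MAP, {'a': 1, 'b': 2}),
    value_hint=MapObject,
)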

However, for interoperability purposes, pyignite provides a mechanism for creating special Python classes to read or write Complex objects. These classes have an interface that simulates all the features of the Complex object: type name, type ID, schema, schema ID, and version number.

Since tying one concrete class to one Complex object type could severely limit the user’s data manipulation capabilities, all the functionality described above is implemented through a metaclass: GenericObjectMeta. This metaclass is used automatically when reading Complex objects.

from pyignite import Client, GenericObjectMeta
from pyignite.datatypes import *

client = Client()
client.connect('localhost', 10800)

person_cache = client.get_or_create_cache('person')

person = person_cache.get(1)
print(person.__class__.__name__)
# Person

print(person)
# Person(first_name='Ivan', last_name='Ivanov', age=33, version=1)

Here you can see how GenericObjectMeta uses the attrs package internally to create nice __init__() and __repr__() methods.

You can reuse the autogenerated class for subsequent writes:

Person = person.__class__

person_cache.put(
    1, Person(first_name='Ivan', last_name='Ivanov', age=33)
)

GenericObjectMeta can also be used directly for creating custom classes:

from collections import OrderedDict

class Person(metaclass=GenericObjectMeta, schema=OrderedDict([
    ('first_name', String),
    ('last_name', String),
    ('age', IntObject),
])):
    pass

Note how the Person class is defined: schema is a GenericObjectMeta metaclass parameter. Another important GenericObjectMeta parameter is type_name; it is optional and defaults to the class name (‘Person’ in our example).

Note also that Person does not have to define its own attributes, methods, or properties (hence the pass statement), although it is entirely possible.
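
You can also inspect the simulated Complex object attributes on the class itself. A quick sketch (the printed values are illustrative):

print(Person.type_name)
# 'Person'

print(Person.schema)
# OrderedDict([('first_name', String), ('last_name', String), ('age', IntObject)])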

Now that your custom Person class is created, you are ready to send data to the Ignite server using its objects. The client will implicitly register your class as soon as the first Complex object is sent. If you intend to use your custom class for reading existing Complex objects’ values before any data is written, you must register the class explicitly with your client:

client.register_binary_type(Person)

Now that we have dealt with the basics of pyignite’s implementation of Complex objects, let us move on to more elaborate examples.

Read

File: read_binary.py.

Ignite SQL uses Complex objects internally to represent keys and rows in SQL tables. Normally SQL data is accessed via queries (see SQL), so we will consider the following example solely as a demonstration of how Binary objects (not Ignite SQL) work.

In the previous examples we created some SQL tables. Let us do it again and examine the Ignite storage afterwards.

result = client.get_cache_names()
print(result)
# [
#     'SQL_PUBLIC_CITY',
#     'SQL_PUBLIC_COUNTRY',
#     'PUBLIC',
#     'SQL_PUBLIC_COUNTRYLANGUAGE'
# ]

We can see that Ignite created a cache for each of our tables. The caches are conveniently named using the ‘SQL_<schema name>_<table name>’ pattern.

Now let us examine the configuration of a cache that contains SQL data, using the settings property.

from pyignite.datatypes.prop_codes import *

city_cache = client.get_or_create_cache('SQL_PUBLIC_CITY')
print(city_cache.settings[PROP_NAME])
# 'SQL_PUBLIC_CITY'

print(city_cache.settings[PROP_QUERY_ENTITIES])
# {
#     'key_type_name': (
#         'SQL_PUBLIC_CITY_9ac8e17a_2f99_45b7_958e_06da32882e9d_KEY'
#     ),
#     'value_type_name': (
#         'SQL_PUBLIC_CITY_9ac8e17a_2f99_45b7_958e_06da32882e9d'
#     ),
#     'table_name': 'CITY',
#     'query_fields': [
#         ...
#     ],
#     'field_name_aliases': [
#         ...
#     ],
#     'query_indexes': []
# }

The values of value_type_name and key_type_name are the names of binary types. The City table’s key fields are stored using the key_type_name type, and the other fields are stored using the value_type_name type.

Now that we have the cache in which the SQL data resides and the names of the key and value data types, we can read the data without using SQL functions and verify the correctness of the result.

result = city_cache.scan()
print(next(result))
# (
#     SQL_PUBLIC_CITY_6fe650e1_700f_4e74_867d_58f52f433c43_KEY(
#         ID=1890,
#         COUNTRYCODE='CHN',
#         version=1
#     ),
#     SQL_PUBLIC_CITY_6fe650e1_700f_4e74_867d_58f52f433c43(
#         NAME='Shanghai',
#         DISTRICT='Shanghai',
#         POPULATION=9696300,
#         version=1
#     )
# )

What we see is a tuple of key and value extracted from the cache. Both the key and the value are Complex objects. The dataclass names are the same as the value_type_name and key_type_name cache settings. The objects’ fields correspond to the SQL table columns.

Create

File: create_binary.py.

Now that we are aware of the internal structure of the Ignite SQL storage, we can create a table and put data in it using only key-value functions.

For example, let us create a table to register High School students: a rough equivalent of the following SQL DDL statement:

CREATE TABLE Student (
    sid CHAR(9),
    name VARCHAR(20),
    login CHAR(8),
    age INTEGER(11),
    gpa REAL
)

These are the necessary steps to perform the task.

  1. Create the table cache.

from collections import OrderedDict

from pyignite import Client, GenericObjectMeta
from pyignite.datatypes import DoubleObject, IntObject, String
from pyignite.datatypes.prop_codes import *

client = Client()
client.connect('127.0.0.1', 10800)

student_cache = client.create_cache({
        PROP_NAME: 'SQL_PUBLIC_STUDENT',
        PROP_SQL_SCHEMA: 'PUBLIC',
        PROP_QUERY_ENTITIES: [
            {
                'table_name': 'Student'.upper(),
                'key_field_name': 'SID',
                'key_type_name': 'java.lang.Integer',
                'field_name_aliases': [],
                'query_fields': [
                    {
                        'name': 'SID',
                        'type_name': 'java.lang.Integer',
                        'is_key_field': True,
                        'is_notnull_constraint_field': True,
                    },
                    {
                        'name': 'NAME',
                        'type_name': 'java.lang.String',
                    },
                    {
                        'name': 'LOGIN',
                        'type_name': 'java.lang.String',
                    },
                    {
                        'name': 'AGE',
                        'type_name': 'java.lang.Integer',
                    },
                    {
                        'name': 'GPA',
                        'type_name': 'java.lang.Double',
                    },
                ],
                'query_indexes': [],
                'value_type_name': 'SQL_PUBLIC_STUDENT_TYPE',
                'value_field_name': None,
            },
        ],
    })

  2. Define the Complex object data class.

class Student(
    metaclass=GenericObjectMeta,
    type_name='SQL_PUBLIC_STUDENT_TYPE',
    schema=OrderedDict([
        ('NAME', String),
        ('LOGIN', String),
        ('AGE', IntObject),
        ('GPA', DoubleObject),
    ])
):
    pass

  3. Insert a row.

student_cache.put(
    1,
    Student(LOGIN='jdoe', NAME='John Doe', AGE=17, GPA=4.25),
    key_hint=IntObject
)

Now let us make sure that our cache can indeed be used with SQL functions.

result = client.sql(
    r'SELECT * FROM Student',
    include_field_names=True
)
print(next(result))
# ['SID', 'NAME', 'LOGIN', 'AGE', 'GPA']

print(*result)
# [1, 'John Doe', 'jdoe', 17, 4.25]

Note, however, that the cache we created cannot be dropped with a DDL command:

# DROP_QUERY = 'DROP TABLE Student'
# client.sql(DROP_QUERY)
#
# pyignite.exceptions.SQLError: class org.apache.ignite.IgniteCheckedException:
# Only cache created with CREATE TABLE may be removed with DROP TABLE
# [cacheName=SQL_PUBLIC_STUDENT]

It should be destroyed like any other key-value cache.

student_cache.destroy()

Migrate

File: migrate_binary.py.

Suppose we have an accounting app that stores its data in key-value format. Our task is to introduce the following changes to the original expense voucher format and data:

  • rename date to expense_date,
  • add report_date,
  • set report_date to the current date if reported is True, None if False,
  • delete reported.

First get the vouchers’ cache.

from datetime import date

from pyignite import Client, GenericObjectMeta
from pyignite.datatypes import DateObject, DecimalObject

client = Client()
client.connect('127.0.0.1', 10800)

accounting = client.get_or_create_cache('accounting')

If you do not store the schema of the Complex object in code, you can obtain it as a dataclass property with the query_binary_type() method.

data_classes = client.query_binary_type('ExpenseVoucher')
print(data_classes)
# {
#     -231598180: <class '__main__.ExpenseVoucher'>
# }

s_id, data_class = data_classes.popitem()
schema = data_class.schema

Let us modify the schema and create a new Complex object class with an updated schema.

schema['expense_date'] = schema['date']
del schema['date']
schema['report_date'] = DateObject
del schema['reported']
schema['sum'] = DecimalObject


# define new data class
class ExpenseVoucherV2(
    metaclass=GenericObjectMeta,
    type_name='ExpenseVoucher',
    schema=schema,
):
    pass

Now migrate the data from the old schema to the new one.

def migrate(cache, data, new_class):
    """ Migrate given data pages. """
    for key, old_value in data:
        # read data
        print(old_value)
        # ExpenseVoucher(
        #     date=datetime(2017, 9, 21, 0, 0),
        #     reported=True,
        #     purpose='Praesent eget fermentum massa',
        #     sum=Decimal('666.67'),
        #     recipient='John Doe',
        #     cashier_id=8,
        #     version=1
        # )

        # create new binary object
        new_value = new_class()

        # process data
        new_value.sum = old_value.sum
        new_value.purpose = old_value.purpose
        new_value.recipient = old_value.recipient
        new_value.cashier_id = old_value.cashier_id
        new_value.expense_date = old_value.date
        new_value.report_date = date.today() if old_value.reported else None

        # replace data
        cache.put(key, new_value)

        # verify data
        verify = cache.get(key)
        print(verify)
        # ExpenseVoucherV2(
        #     purpose='Praesent eget fermentum massa',
        #     sum=Decimal('666.67'),
        #     recipient='John Doe',
        #     cashier_id=8,
        #     expense_date=datetime(2017, 9, 21, 0, 0),
        #     report_date=datetime(2018, 8, 29, 0, 0),
        #     version=1,
        # )


# migrate data
result = accounting.scan()
migrate(accounting, result, ExpenseVoucherV2)

# cleanup
accounting.destroy()
client.close()

At this moment, any of the fields defined in either of our schemas may be present in a given binary object, depending on which schema was used when the object was written with put() or similar methods. The Ignite Binary API does not have a method for deleting a Complex object schema; all the schemas ever defined will stay in the cluster until its shutdown.

This versioning mechanism is quite simple and robust, but it has its limitations. The main one: you cannot change the type of an existing field. If you try, you will be greeted with the following message:

`org.apache.ignite.binary.BinaryObjectException: Wrong value has been set [typeName=SomeType, fieldName=f1, fieldType=String, assignedValueType=int]`

As an alternative, you can rename the field or create a new Complex object.
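
Continuing the error message above, the rename approach would look like this sketch (the field names f1 and f1_int are illustrative):

from pyignite.datatypes import IntObject

schema['f1_int'] = IntObject  # a new field carrying the desired type
del schema['f1']              # objects already written keep the old field

The modified schema is then attached to a new class via GenericObjectMeta, exactly as shown in the migration example above.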

Failover

File: failover.py.

When the connection to the server is broken or times out, the Client object propagates the original exception (OSError or SocketError), but keeps its constructor’s parameters intact and tries to reconnect transparently.

When there’s no way for Client to reconnect, it raises a special ReconnectError exception.

The following example features a simple node list traversal failover mechanism. Gather 3 Ignite nodes on localhost into one cluster and run:

from pyignite import Client
from pyignite.datatypes.cache_config import CacheMode
from pyignite.datatypes.prop_codes import *
from pyignite.exceptions import SocketError


nodes = [
    ('127.0.0.1', 10800),
    ('127.0.0.1', 10801),
    ('127.0.0.1', 10802),
]

client = Client(timeout=4.0)
client.connect(nodes)
print('Connected to {}'.format(client))

my_cache = client.get_or_create_cache({
    PROP_NAME: 'my_cache',
    PROP_CACHE_MODE: CacheMode.REPLICATED,
})
my_cache.put('test_key', 0)

# abstract main loop
while True:
    try:
        # do the work
        test_value = my_cache.get('test_key')
        my_cache.put('test_key', test_value + 1)
    except (OSError, SocketError) as e:
        # recover from error (repeat last command, check data
        # consistency or just continue − depends on the task)
        print('Error: {}'.format(e))
        print('Last value: {}'.format(my_cache.get('test_key')))
        print('Reconnected to {}'.format(client))

Then try shutting down and restarting nodes, and see what happens.

# Connected to 127.0.0.1:10800
# Error: [Errno 104] Connection reset by peer
# Last value: 6999
# Reconnected to 127.0.0.1:10801
# Error: Socket connection broken.
# Last value: 12302
# Reconnected to 127.0.0.1:10802
# Error: [Errno 111] Connection refused
# Traceback (most recent call last):
#     ...
# pyignite.exceptions.ReconnectError: Can not reconnect: out of nodes

Client reconnection does not require an explicit user action, like calling a special method or resetting a parameter. Note, however, that reconnection is lazy: it happens only if (and when) it is needed. In this example, the automatic reconnection happens when the script checks the last saved value:

        print('Last value: {}'.format(my_cache.get('test_key')))

This means that, instead of checking the connection status, it is better for the pyignite user to simply attempt the intended data operations and catch the resulting exceptions.

The connect() method accepts any iterable, not just a list. This means you can implement any reconnection policy (round-robin, node prioritization, pause on reconnect, or graceful backoff) with a generator.
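
For example, here is a sketch of a custom policy (the patient_round_robin generator is illustrative, not part of pyignite): an endless round-robin that pauses after each full pass over the node list.

import itertools
import time


def patient_round_robin(nodes, pause=2.0):
    for i, node in enumerate(itertools.cycle(nodes)):
        if i and i % len(nodes) == 0:
            time.sleep(pause)  # back off after each full pass
        yield node


client.connect(patient_round_robin(nodes))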

pyignite comes with a sample RoundRobin generator (found in pyignite.connection.generators). In the above example, try to replace

client.connect(nodes)

with

client.connect(RoundRobin(nodes, max_reconnects=20))

The client will try to reconnect to node 1 after node 3 has crashed, then to node 2, and so on. At least one node must be active for RoundRobin to work properly.

SSL/TLS

There are some special requirements for testing SSL connectivity.

The Ignite server must be configured to secure the binary protocol port. The server configuration process can be split into these basic steps:

  1. Create a key store and a trust store using Java keytool. When creating the trust store, you will probably need a client X.509 certificate. You will also need to export the server X.509 certificate to include in the client chain of trust.
  2. Turn on the SslContextFactory for your Ignite cluster according to this document: Securing Connection Between Nodes.
  3. Tell Ignite to encrypt data on its thin client port, using the settings of ClientConnectorConfiguration. If you only want to encrypt the connection, not validate the client’s certificate, set the sslClientAuth property to false. You will still have to set up the trust store in step 1, though.

Client-side SSL settings are summarized here: Client.

To use SSL encryption without certificate validation, just set use_ssl.

from pyignite import Client

client = Client(use_ssl=True)
client.connect('127.0.0.1', 10800)

To identify the client, create an SSL key pair and a certificate with the openssl command and use them in this manner:

from pyignite import Client

client = Client(
    use_ssl=True,
    ssl_keyfile='etc/.ssl/keyfile.key',
    ssl_certfile='etc/.ssl/certfile.crt',
)
client.connect('ignite-example.com', 10800)

To check the authenticity of the server, get the server certificate or certificate chain and provide its path via the ssl_ca_certfile parameter.

import ssl

from pyignite import Client

client = Client(
    use_ssl=True,
    ssl_ca_certfile='etc/.ssl/ca_certs',
    ssl_cert_reqs=ssl.CERT_REQUIRED,
)
client.connect('ignite-example.com', 10800)

You can also provide such parameters as the set of ciphers (ssl_ciphers) and the SSL version (ssl_version), if the defaults (ssl._DEFAULT_CIPHERS and TLS 1.1) do not suit you.
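
For example, a sketch (the protocol constant and cipher string here are illustrative; pick ones that match your server setup):

import ssl

from pyignite import Client

client = Client(
    use_ssl=True,
    ssl_version=ssl.PROTOCOL_TLSv1_2,
    ssl_ciphers='ECDHE-RSA-AES256-GCM-SHA384',
)
client.connect('ignite-example.com', 10800)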

Password authentication

To authenticate, you must set the authenticationEnabled property to true and enable persistence in the Ignite XML configuration file, as described in the Authentication section of the Ignite documentation.

Be advised that sending credentials over an open channel is strongly discouraged, since they can easily be intercepted. Supplying credentials automatically turns SSL on from the client side. It is highly recommended to secure the connection to the Ignite server, as described in the SSL/TLS example, in order to use password authentication.

Then just supply the username and password parameters to the Client constructor.

from pyignite import Client

client = Client(username='ignite', password='ignite')
client.connect('ignite-example.com', 10800)

If you still do not wish to secure the connection in spite of the warning, then disable SSL explicitly when creating the client object:

client = Client(username='ignite', password='ignite', use_ssl=False)

Note that it is not possible for the Ignite thin client to obtain the cluster’s authentication settings through the binary protocol. Unexpected credentials are simply ignored by the server. When the server requires authentication and no valid credentials are supplied, the user is greeted with the following message:

# pyignite.exceptions.HandshakeError: Handshake error: Unauthenticated sessions are prohibited.