API Reference

Client Interface

class aioinflux.client.InfluxDBClient(host='localhost', port=8086, path='/', mode='async', output='json', db=None, database=None, ssl=False, *, unix_socket=None, username=None, password=None, timeout=None, loop=None, **kwargs)[source]
__init__(host='localhost', port=8086, path='/', mode='async', output='json', db=None, database=None, ssl=False, *, unix_socket=None, username=None, password=None, timeout=None, loop=None, **kwargs)[source]

InfluxDBClient holds the information necessary to interact with InfluxDB. It is async by default, but can also be used as a sync/blocking client. When querying, responses are returned as parsed JSON by default, but can also be wrapped in an easily iterable wrapper object or parsed into Pandas DataFrames. The three main public methods correspond to the three endpoints of the InfluxDB API, namely:

  1. ping()

  2. write()

  3. query()

See the documentation of each of the above methods for further usage details.

See also: https://docs.influxdata.com/influxdb/latest/tools/api/

Parameters
  • host (str) – Hostname to connect to InfluxDB.

  • port (int) – Port to connect to InfluxDB.

  • path (str) – Path to connect to InfluxDB.

  • mode (str) –

    Mode in which client should run. Available options:

    • async: Default mode. Each query/request to the backend is performed asynchronously; public methods return coroutines that must be awaited.

    • blocking: Behaves in sync/blocking fashion, similar to the official InfluxDB-Python client.

  • output (str) –

    Output format of the response received from InfluxDB.

    • json: Default format. Returns parsed JSON as received from InfluxDB.

    • dataframe: Parses results into pandas.DataFrame. Not compatible with chunked responses.

  • db (Optional[str]) – Default database to be used by the client.

  • ssl (bool) – Whether HTTPS should be used.

  • unix_socket (Optional[str]) – Path to the InfluxDB Unix domain socket.

  • username (Optional[str]) – Username to use to connect to InfluxDB.

  • password (Optional[str]) – User password.

  • timeout (Union[ClientTimeout, float, None]) – Timeout in seconds or an aiohttp.ClientTimeout object.

  • database (Optional[str]) – Default database to be used by the client. This field is for argument consistency with the official InfluxDB Python client.

  • loop (Optional[AbstractEventLoop]) – Asyncio event loop.

  • kwargs – Additional kwargs for aiohttp.ClientSession
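A minimal construction sketch, assuming a reachable InfluxDB instance on localhost and a hypothetical database named 'mydb':

```python
import asyncio
from aioinflux import InfluxDBClient

async def main():
    # async mode (default): public methods return coroutines
    client = InfluxDBClient(host='localhost', port=8086, db='mydb')
    print(await client.ping())   # headers of the influxd response

asyncio.run(main())

# Blocking mode behaves like the official InfluxDB-Python client:
# client = InfluxDBClient(db='mydb', mode='blocking')
# client.ping()
```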

async create_session(**kwargs)[source]

Creates an aiohttp.ClientSession.

Override this method or call it with kwargs to use aiohttp functionality not covered by __init__.

ping()[source]

Pings InfluxDB

Returns a dictionary containing the headers of the response from influxd.

Return type

dict

query(q, *, epoch='ns', chunked=False, chunk_size=None, db=None)[source]

Sends a query to InfluxDB. Please refer to the InfluxDB documentation for all the possible queries: https://docs.influxdata.com/influxdb/latest/query_language/

Parameters
  • q (AnyStr) – Raw query string

  • db (Optional[str]) – Database to be queried. Defaults to self.db.

  • epoch (str) – Precision level of response timestamps. Valid values: {'ns', 'u', 'µ', 'ms', 's', 'm', 'h'}.

  • chunked (bool) – If True, makes InfluxDB return results in streamed batches rather than as a single response. Returns an AsyncGenerator which yields responses in the same format as non-chunked queries.

  • chunk_size (Optional[int]) – Max number of points for each chunk. By default, InfluxDB chunks responses by series or by every 10,000 points, whichever occurs first.

Return type

Union[AsyncGenerator[ResultType, None], ResultType]

Returns

Response in the format specified by the combination of InfluxDBClient.output and chunked
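A sketch of both regular and chunked querying, assuming an async-mode client with a hypothetical database 'mydb' and measurement 'cpu_load':

```python
import asyncio
from aioinflux import InfluxDBClient

async def main():
    client = InfluxDBClient(db='mydb')   # 'mydb' is a hypothetical database

    # Regular query: a single response (parsed JSON with output='json')
    resp = await client.query("SELECT * FROM cpu_load LIMIT 10", epoch='s')

    # Chunked query: awaiting yields an async generator; each chunk has
    # the same format as a non-chunked response
    chunks = await client.query("SELECT * FROM cpu_load",
                                chunked=True, chunk_size=1000)
    async for chunk in chunks:
        print(chunk)

asyncio.run(main())
```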

write(data, measurement=None, db=None, precision=None, rp=None, tag_columns=None, **extra_tags)[source]

Writes data to InfluxDB. Input can be:

  1. A mapping (e.g. dict) containing the keys: measurement, time, tags, fields

  2. A Pandas DataFrame with a DatetimeIndex

  3. A user-defined class decorated with lineprotocol()

  4. A string (str or bytes) properly formatted in InfluxDB’s line protocol

  5. An iterable of one of the above

Input data in formats 1-3 are parsed to the line protocol before being written to InfluxDB. See the InfluxDB docs for more details.

Parameters
  • data (Union[PointType, Iterable[PointType]]) – Input data (see description above).

  • measurement (Optional[str]) – Measurement name. Mandatory when writing DataFrames only. When writing dictionary-like data, this field is treated as the default value for points that do not contain a measurement field.

  • db (Optional[str]) – Database to be written to. Defaults to self.db.

  • precision (Optional[str]) – Sets the precision for the supplied Unix time values. Ignored if input timestamp data is of non-integer type. Valid values: {'ns', 'u', 'µ', 'ms', 's', 'm', 'h'}

  • rp (Optional[str]) – Sets the target retention policy for the write. If unspecified, data is written to the default retention policy.

  • tag_columns (Optional[Iterable]) – Columns to be treated as tags (used when writing DataFrames only)

  • extra_tags – Additional tags to be added to all points passed. Valid when writing DataFrames or mappings only; silently ignored for user-defined classes and raw line protocol strings.

Return type

bool

Returns

Returns True if insert is successful. Raises ValueError otherwise.
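A sketch of writing a dictionary-like point (format 1 above); the measurement, tags and database names are hypothetical:

```python
import asyncio
from aioinflux import InfluxDBClient

point = {
    'time': '2009-11-10T23:00:00Z',
    'measurement': 'cpu_load',          # hypothetical measurement
    'tags': {'host': 'server01', 'region': 'us-west'},
    'fields': {'value': 0.64},
}

async def main():
    client = InfluxDBClient(db='mydb')  # hypothetical database
    ok = await client.write(point)      # True on success
    print(ok)

asyncio.run(main())
```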

exception aioinflux.client.InfluxDBError[source]

Raised when a server-side error occurs

exception aioinflux.client.InfluxDBWriteError(resp)[source]

Raised when a server-side writing error occurs

Result iteration

aioinflux.iterutils.iterpoints(resp, parser=None)[source]

Iterates over a response JSON, yielding data point by point.

Can be used with both regular and chunked responses. By default, it yields a plain list of values for each point, without column names or other metadata.

In case a specific format is needed, an optional parser argument can be passed. parser is a function/callable that takes a data point's values and, optionally, a meta keyword argument: a dictionary containing all or a subset of the following keys: {'columns', 'name', 'tags', 'statement_id'}.

Sample parser functions:

# Function with optional meta argument
def parser(*x, meta):
    return dict(zip(meta['columns'], x))

# Namedtuple (callable)
from collections import namedtuple
parser = namedtuple('MyPoint', ['col1', 'col2', 'col3'])

Parameters
  • resp (dict) – Dictionary containing parsed JSON (output from InfluxDBClient.query)

  • parser (Optional[Callable]) – Optional parser function/callable

Return type

Generator

Returns

Generator object
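To illustrate the parser contract, here is how the sample parsers above transform a single point. The values and metadata are hypothetical stand-ins for what iterpoints extracts from a real response:

```python
from collections import namedtuple

# Hypothetical values and metadata for a single point, as found in an
# InfluxDB response under results[0]['series'][0]
values = ['2020-01-01T00:00:00Z', 0.64, 'server01']
meta = {'columns': ['time', 'load', 'host'], 'name': 'cpu_load'}

def dict_parser(*x, meta):
    # Zip column names with point values
    return dict(zip(meta['columns'], x))

point = dict_parser(*values, meta=meta)
# → {'time': '2020-01-01T00:00:00Z', 'load': 0.64, 'host': 'server01'}

# Namedtuple parsers receive the values positionally (no meta argument)
MyPoint = namedtuple('MyPoint', ['time', 'load', 'host'])
named = MyPoint(*values)
# → MyPoint(time='2020-01-01T00:00:00Z', load=0.64, host='server01')
```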

Serialization

Mapping

aioinflux.serialization.mapping.serialize(point, measurement=None, **extra_tags)[source]

Converts dictionary-like data into a single line protocol line (point)

Return type

bytes
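For reference, the transformation is from a dict-like point to an InfluxDB line protocol line. The helper below is a hypothetical, simplified sketch of the output shape only, not the library's implementation (which also handles escaping, integer type suffixes and timestamp conversion):

```python
def to_line(point, measurement=None, **extra_tags):
    # Illustrative only: naive line-protocol rendering of a dict point
    name = point.get('measurement', measurement)
    tags = {**point.get('tags', {}), **extra_tags}
    tag_str = ''.join(f',{k}={v}' for k, v in sorted(tags.items()))
    field_str = ','.join(f'{k}={v}' for k, v in point['fields'].items())
    line = f'{name}{tag_str} {field_str}'
    if 'time' in point:
        line += f" {point['time']}"
    return line.encode()

line = to_line({'measurement': 'cpu_load',
                'tags': {'host': 'server01'},
                'fields': {'value': 0.64},
                'time': 1573190400000000000})
# → b'cpu_load,host=server01 value=0.64 1573190400000000000'
```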

Dataframe

User-defined classes

exception aioinflux.serialization.usertype.SchemaError[source]

Raised when an invalid schema is passed to lineprotocol()

aioinflux.serialization.usertype.lineprotocol(cls=None, *, schema=None, rm_none=False, extra_tags=None, placeholder=False)[source]

Adds a to_lineprotocol method to arbitrary user-defined classes

Parameters
  • cls – Class to monkey-patch

  • schema (Optional[Mapping[str, type]]) – Schema dictionary (attr/type pairs).

  • rm_none (bool) – Whether to apply a regex to remove None values. If False, passing None values to boolean, integer, float or time fields will result in write errors. Setting it to True is “safer” but impacts performance.

  • extra_tags (Optional[Mapping[str, str]]) – Hard coded tags to be added to every point generated.

  • placeholder (bool) – If no field attributes are present, add a placeholder attribute (_) which is always equal to True. This is a workaround for creating field-less points (which is not supported natively by InfluxDB)
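A sketch of annotation-based usage. The Trade class and its fields are hypothetical, and this assumes the type annotations (TAG, FLOAT, TIMEINT, etc.) are importable from the aioinflux top-level namespace:

```python
from typing import NamedTuple
from aioinflux import lineprotocol, TAG, FLOAT, TIMEINT

@lineprotocol
class Trade(NamedTuple):
    timestamp: TIMEINT   # integer Unix timestamp (nanoseconds)
    instrument: TAG      # serialized as a tag
    price: FLOAT         # serialized as a float field

trade = Trade(timestamp=1573190400000000000,
              instrument='AAPL',
              price=253.5)

# The decorator adds this method; it returns bytes in line protocol format,
# ready to be passed to InfluxDBClient.write()
line = trade.to_lineprotocol()
```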