API Reference
This part of the documentation covers all the interfaces of Aioinflux.
Note
🚧 This section of the documentation is a work in progress and may be incomplete or inaccurate 🚧
Client Interface

class aioinflux.client.InfluxDBClient(host='localhost', port=8086, mode='async', output='json', db=None, *, ssl=False, unix_socket=None, username=None, password=None, database=None, loop=None)

ping()
Pings InfluxDB. Returns a dictionary containing the headers of the response from influxd.
Return type: dict
query(q, *args, epoch='ns', chunked=False, chunk_size=None, db=None, parser=None, **kwargs)
Sends a query to InfluxDB. Please refer to the InfluxDB documentation for all the possible queries: https://docs.influxdata.com/influxdb/latest/query_language/

Parameters:
- q (AnyStr) – Raw query string
- args – Positional arguments for query patterns
- db (Optional[str]) – Database to be queried. Defaults to self.db.
- epoch (str) – Precision level of response timestamps. Valid values: {'ns', 'u', 'µ', 'ms', 's', 'm', 'h'}.
- chunked (bool) – If True, makes InfluxDB return results in streamed batches rather than as a single response. Returns an AsyncGenerator which yields responses in the same format as non-chunked queries.
- chunk_size (Optional[int]) – Maximum number of points for each chunk. By default, InfluxDB chunks responses by series or by every 10,000 points, whichever occurs first.
- parser (Optional[Callable]) – Optional parser function for 'iterable' mode
- kwargs – Keyword arguments for query patterns

Return type: Union[AsyncGenerator, dict, bytes, InfluxDBResult, InfluxDBChunkedResult]
Returns: An async generator if chunked is True; otherwise a dictionary containing the parsed JSON response.
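Since chunked=True makes query() return an async generator, results are consumed with an async for loop. The sketch below uses a stand-in coroutine (fake_chunked_query is not part of Aioinflux) so the consumption pattern can be shown without a running InfluxDB server; each fake chunk mimics the shape of a parsed JSON response.

```python
import asyncio

# Hypothetical stand-in for InfluxDBClient.query(..., chunked=True):
# the real client yields parsed JSON responses; here we fake three
# chunks so the iteration pattern can be shown offline.
async def fake_chunked_query(q, chunk_size=None):
    for i in range(3):
        # Each chunk has the same shape as a non-chunked query response.
        yield {"results": [{"series": [{"name": "cpu_stats", "values": [[i]]}]}]}

async def main():
    chunks = []
    # With chunked=True the return value is an async generator;
    # iterate with `async for`, exactly as with the real client.
    async for chunk in fake_chunked_query("SELECT * FROM cpu_stats", chunk_size=10000):
        chunks.append(chunk)
    return chunks

chunks = asyncio.run(main())
print(len(chunks))  # 3 chunks collected
```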
classmethod set_query_pattern(name, qp)
Defines custom methods to provide quick access to commonly used query patterns. Query patterns are plain strings with optional named placeholders. Named placeholders are processed as keyword arguments to str.format. Positional arguments are also supported.

Sample query pattern:
"SELECT mean(load) FROM cpu_stats WHERE host = '{host}' AND time > now() - {days}d"

Parameters:
- name – Name of the method to be generated for the query pattern
- qp – Query pattern string
Return type: None
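The placeholder substitution that query patterns rely on is plain str.format. A minimal sketch, using the sample pattern above with assumed placeholder values:

```python
# A query pattern is a plain string; named placeholders are filled via
# str.format, so a pattern-generated method simply forwards its keyword
# arguments. host and days values below are arbitrary examples.
pattern = "SELECT mean(load) FROM cpu_stats WHERE host = '{host}' AND time > now() - {days}d"

query = pattern.format(host="server01", days=7)
print(query)
# SELECT mean(load) FROM cpu_stats WHERE host = 'server01' AND time > now() - 7d
```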
write(data, measurement=None, db=None, precision=None, rp=None, tag_columns=None, **extra_tags)
Writes data to InfluxDB. Input can be:
1. a string properly formatted in InfluxDB's line protocol
2. a dictionary-like object containing four keys: measurement, time, tags, fields
3. a Pandas DataFrame with a DatetimeIndex
4. an iterable of one of the above
Input data in formats 2-4 are parsed to the line protocol before being written to InfluxDB. See the InfluxDB docs for more details.
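To illustrate what that parsing produces, here is a hand-rolled conversion of a dictionary-like point (format 2 above) into a line protocol string. Aioinflux performs this internally; the helper below is not part of its API, and escaping of special characters is omitted for brevity.

```python
# Illustrative only: convert a dict-like point into InfluxDB line
# protocol (measurement,tag_set field_set timestamp). Not Aioinflux's
# actual parser; special-character escaping is omitted.
def to_line_protocol(point):
    tags = ",".join(f"{k}={v}" for k, v in sorted(point["tags"].items()))
    fields = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(point["fields"].items())
    )
    return f'{point["measurement"]},{tags} {fields} {point["time"]}'

point = {
    "measurement": "cpu_stats",
    "time": 1553342219000000000,        # integer timestamp (precision applies)
    "tags": {"host": "server01"},
    "fields": {"load": 0.64},
}
line = to_line_protocol(point)
print(line)
# cpu_stats,host=server01 load=0.64 1553342219000000000
```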
Parameters:
- data (Union[AnyStr, Mapping, Iterable[Union[AnyStr, Mapping]]]) – Input data (see description above).
- measurement (Optional[str]) – Measurement name. Mandatory when writing DataFrames only. When writing dictionary-like data, this field is treated as the default value for points that do not contain a measurement field.
- db (Optional[str]) – Database to be written to. Defaults to self.db.
- precision (Optional[str]) – Sets the precision for the supplied Unix time values. Ignored if input timestamp data is of non-integer type. Valid values: {'ns', 'u', 'µ', 'ms', 's', 'm', 'h'}
- rp (Optional[str]) – Sets the target retention policy for the write. If unspecified, data is written to the default retention policy.
- tag_columns (Optional[Iterable]) – Columns to be treated as tags (used when writing DataFrames only)
- extra_tags – Additional tags to be added to all points passed.

Return type: bool
Returns: True if the insert is successful. Raises ValueError otherwise.
Serialization

aioinflux.serialization.common.escape(string, escape_pattern)
Helper function for string escaping
class aioinflux.serialization.datapoint.DataPoint
Base class for dynamically generated datapoint classes
aioinflux.serialization.datapoint.datapoint(schema=None, name='DataPoint', *, rm_none=False, fill_none=False, extra_tags=None)
Dynamic datapoint class factory

Can be used as a decorator (similar to Python 3.7 dataclasses) or as a function (similar to namedtuple(), but mutable).

Main characteristics:
- Supports accessing field values by attribute or subscription
- Supports dict-like iteration via the items method
- Built-in serialization to InfluxDB line protocol through the to_lineprotocol method
- About 2-3x faster serialization than the serialization.mapping module
  - The difference gets smaller (1x-1.5x) when rm_none=True or when the number of fields/tags is very large (20+)
Parameters:
- schema – Dictionary-based (functional namedtuple style) or @dataclass decorator-based (dataclass style) measurement schema
- name – Class name (used when passing schema dictionaries only)
- rm_none – Whether to apply a regex to remove None values from the serialized output. If False, passing None values to boolean, integer, float or time fields will result in write errors. Setting to True is "safer" but impacts performance.
- fill_none – Whether or not to set missing fields to None. Likely best used together with rm_none=True.
- extra_tags – Hard-coded tags to be added to every point generated.
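To make the "main characteristics" above concrete, here is a toy stand-in (not the Aioinflux implementation, and the Trade class and its fields are invented for illustration) showing the behavior a generated datapoint class offers: attribute and subscription access, dict-like iteration via items(), and a to_lineprotocol() serializer.

```python
# Toy sketch of a generated datapoint class; not Aioinflux code.
class Trade:
    _tags = ("symbol",)
    _fields = ("price", "size")

    def __init__(self, timestamp, symbol, price, size):
        self.timestamp = timestamp
        self.symbol = symbol
        self.price = price
        self.size = size

    def __getitem__(self, key):   # subscription access: t["price"]
        return getattr(self, key)

    def items(self):              # dict-like iteration
        return [(k, getattr(self, k))
                for k in ("timestamp", *self._tags, *self._fields)]

    def to_lineprotocol(self):    # line protocol serialization
        tags = ",".join(f"{k}={getattr(self, k)}" for k in self._tags)
        fields = ",".join(f"{k}={getattr(self, k)}" for k in self._fields)
        return f"trade,{tags} {fields} {self.timestamp}"

t = Trade(1553342219000000000, "AAPL", 183.2, 100)
print(t.price, t["price"])        # attribute and subscription access
print(t.to_lineprotocol())
# trade,symbol=AAPL price=183.2,size=100 1553342219000000000
```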