HTTP Client Reference

All commands are accessed through the ipfshttpclient.Client class.

Exceptions

The class hierarchy for exceptions is:

Error
 ├── VersionMismatch
 ├── AddressError
 ├── EncoderError
 │    ├── EncoderMissingError
 │    ├── EncodingError
 │    └── DecodingError
 └── CommunicationError
      ├── ProtocolError
      ├── StatusError
      ├── ErrorResponse
      │    └── PartialErrorResponse
      ├── ConnectionError
      └── TimeoutError
exception ipfshttpclient.exceptions.AddressError(addr)[source]

Raised when the provided daemon location Multiaddr does not match any of the supported patterns.

exception ipfshttpclient.exceptions.CommunicationError(original, _message=None)[source]

Base class for all network communication related errors.

exception ipfshttpclient.exceptions.ConnectionError(original, _message=None)[source]

Raised when connecting to the service has failed on the socket layer.

exception ipfshttpclient.exceptions.DecodingError(encoder_name, original)[source]

Raised when decoding a byte string to a Python object has failed due to some problem with the input data.

exception ipfshttpclient.exceptions.EncoderError(message, encoder_name)[source]

Base class for all encoding and decoding related errors.

exception ipfshttpclient.exceptions.EncoderMissingError(encoder_name)[source]

Raised when a requested encoder class does not actually exist.

exception ipfshttpclient.exceptions.EncodingError(encoder_name, original)[source]

Raised when encoding a Python object into a byte string has failed due to some problem with the input data.

exception ipfshttpclient.exceptions.Error[source]

Base class for all exceptions in this module.

exception ipfshttpclient.exceptions.ErrorResponse(message, original)[source]

Raised when the daemon has responded with an error message because the requested operation could not be carried out.

exception ipfshttpclient.exceptions.PartialErrorResponse(message, original=None)[source]

Raised when the daemon has responded with an error message after having already returned some data.

exception ipfshttpclient.exceptions.ProtocolError(original, _message=None)[source]

Raised when parsing the response from the daemon has failed.

This can most likely occur if the service on the remote end isn’t in fact an IPFS daemon.

exception ipfshttpclient.exceptions.StatusError(original, _message=None)[source]

Raised when the daemon responds with an error to our request.

exception ipfshttpclient.exceptions.TimeoutError(original, _message=None)[source]

Raised when the daemon didn’t respond in time.

exception ipfshttpclient.exceptions.VersionMismatch(current, minimum, maximum)[source]

Raised when daemon version is not supported by this client version.

Utility Functions

ipfshttpclient.DEFAULT_ADDR

The default IPFS API daemon location the client library will attempt to connect to. By default this will have a value of multiaddr.Multiaddr("/dns/localhost/tcp/5001/http").

This may be overridden on a per-client-instance basis using the addr parameter of the connect() function.

ipfshttpclient.DEFAULT_BASE

The default HTTP URL path prefix (or “base”) that the client library will use. By default this will have a value of "api/v0".

This may be overridden on a per-client-instance basis using the base parameter of the connect() function.

ipfshttpclient.connect(addr=<Multiaddr /dns/localhost/tcp/5001/http>, base='api/v0', *, chunk_size=8192, offline=False, session=False, auth=None, cookies=None, headers={}, timeout=120, username=None, password=None)[source]

Create a new Client instance and connect to the daemon to validate that its version is supported, as well as apply any known workarounds for the given daemon version

Raises

VersionMismatch

All parameters are identical to those passed to the constructor of the Client class.
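
As a minimal connection sketch (the multiaddr, credentials and timeout shown here are illustrative placeholders rather than recommended values):

import ipfshttpclient

client = ipfshttpclient.connect(
        "/dns/localhost/tcp/5001/http",              # addr – location of the API daemon
        auth=("example-user", "example-password"),   # optional HTTP basic authentication
        timeout=60)                                  # seconds to wait per request
print(client.version())
client.close()  # free the underlying connection pool when done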

ipfshttpclient.assert_version(version, minimum='0.4.22', maximum='0.7.0', blacklist=[])[source]

Make sure that the given daemon version is supported by this client version.

Raises

VersionMismatch

Parameters
  • version (str) – The actual version of an IPFS daemon

  • minimum (str) – The minimal IPFS daemon version allowed (inclusive)

  • maximum (str) – The maximum IPFS daemon version allowed (exclusive)

  • blacklist (Iterable[str]) – Versions explicitly disallowed even if within the range between minimum and maximum

Return type

None
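
A short usage sketch (the version strings below are chosen only to illustrate the in-range and out-of-range cases):

import ipfshttpclient

ipfshttpclient.assert_version("0.5.1")   # within the default range, returns None
try:
        ipfshttpclient.assert_version("0.3.0")   # below the default minimum of 0.4.22
except ipfshttpclient.exceptions.VersionMismatch:
        pass  # raised for unsupported daemon versions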

The API Client

All methods accept the following parameters in their kwargs:

  • offline (bool) – Prevent the daemon from communicating with any remote IPFS node while performing the requested action?

  • opts (dict) – A mapping of custom IPFS API parameters to be sent along with the regular parameters generated by the client library

    • Values specified here will always override their respective counterparts of the client library itself.

  • stream (bool) – Return results incrementally as they arrive?

    • Each method called with stream=True will return a generator instead of the documented value. If the return type is of type list then each item of the given list will be yielded separately; if it is of type bytes then arbitrary bags of bytes will be yielded that together form a stream; finally, if it is of type dict then the single dictionary item will be yielded once. (A short sketch follows after this list.)

  • timeout (float) – The number of seconds to wait for a daemon reply before giving up
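
As a sketch of the stream behaviour mentioned above, using the example file CID that appears elsewhere in this document (any other file CID behaves the same way):

with ipfshttpclient.connect() as client:
        # Without stream=True, cat() returns the whole file as a single bytes value;
        # with stream=True it instead returns a generator yielding successive chunks.
        for chunk in client.cat('QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX', stream=True):
                print(len(chunk))  # each item is an arbitrary run of bytes from the stream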

class ipfshttpclient.Client[source]

The main IPFS HTTP client class

Allows access to an IPFS daemon instance using its HTTP API by exposing an IPFS Interface Core compatible set of methods.

It is possible to instantiate this class directly, using the same parameters as connect(), to prevent the client from checking for an active and compatible version of the daemon. In general however, calling connect() should be preferred.

In order to reduce latency between individual API calls, this class may keep a pool of TCP connections between this client and the API daemon open between requests. The only caveat of this is that the client object should be closed when it is not used anymore to prevent resource leaks.

The easiest way of using this “session management” facility is using a context manager:

with ipfshttpclient.connect() as client:
        print(client.version())  # These calls…
        print(client.version())  # …will reuse their TCP connection

A client object may be re-opened several times:

client = ipfshttpclient.connect()
print(client.version())  # Perform API call on separate TCP connection
with client:
        print(client.version())  # These calls…
        print(client.version())  # …will share a TCP connection
with client:
        print(client.version())  # These calls…
        print(client.version())  # …will share a different TCP connection

When storing a long-running Client object use it like this:

class Consumer:
        def __init__(self):
                self._client = ipfshttpclient.connect(session=True)

        # … other code …

        def close(self):  # Call this when you're done
                self._client.close()
Parameters
  • addr

    The Multiaddr describing the API daemon location, as used in the API key of the go-ipfs Addresses config section

    Supported addressing patterns are currently:

    • /{dns,dns4,dns6,ip4,ip6}/<host>/tcp/<port> (HTTP)

    • /{dns,dns4,dns6,ip4,ip6}/<host>/tcp/<port>/http (HTTP)

    • /{dns,dns4,dns6,ip4,ip6}/<host>/tcp/<port>/https (HTTPS)

    Additional forms (proxying) may be supported in the future.

  • base – The HTTP URL path prefix (or “base”) at which the API is exposed on the API daemon

  • chunk_size – The size of data chunks passed to the operating system when uploading files or text/binary content

  • offline – Ask daemon to operate in “offline mode” – that is, it should not consult the network when unable to find resources locally, but fail instead

  • session – Create this Client instance with a session already open? (Useful for long-running client objects.)

  • auth – HTTP basic authentication (username, password) tuple to send along with each request to the API daemon

  • cookies – HTTP cookies to send along with each request to the API daemon

  • headers – Custom HTTP headers to send along with each request to the API daemon

  • timeout

    Connection timeout (in seconds) when connecting to the API daemon

    If a tuple is passed its contents will be interpreted as the values for the connecting and receiving phases respectively, otherwise the value will apply to both phases.

    The default value is implementation-defined. A value of math.inf disables the respective timeout.
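
To illustrate the timeout forms described above (the values are arbitrary):

import math
import ipfshttpclient

client = ipfshttpclient.connect(timeout=60)              # one value for both phases
client = ipfshttpclient.connect(timeout=(5, 120))         # 5s to connect, 120s per reply
client = ipfshttpclient.connect(timeout=(5, math.inf))    # disable the receiving timeout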

class bitswap
stat(**kwargs)

Returns some diagnostic information from the bitswap agent

>>> client.bitswap.stat()
{'BlocksReceived': 96,
 'DupBlksReceived': 73,
 'DupDataReceived': 2560601,
 'ProviderBufLen': 0,
 'Peers': [
        'QmNZFQRxt9RMNm2VVtuV2Qx7q69bcMWRVXmr5CEkJEgJJP',
        'QmNfCubGpwYZAQxX8LQDsYgB48C4GbfZHuYdexpX9mbNyT',
        'QmNfnZ8SCs3jAtNPc8kf3WJqJqSoX7wsX7VqkLdEYMao4u',

 ],
 'Wantlist': [
        'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
        'QmdCWFLDXqgdWQY9kVubbEHBbkieKd3uo7MtCm7nTZZE9K',
        'QmVQ1XvYGF19X4eJqz1s7FJYJqAxFC4oqh3vWJJEXn66cp'
 ]
}
Returns

dict – Statistics, peers and wanted blocks

wantlist(peer=None, **kwargs)

Returns blocks currently on the bitswap wantlist

>>> client.bitswap.wantlist()
{'Keys': [
        'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
        'QmdCWFLDXqgdWQY9kVubbEHBbkieKd3uo7MtCm7nTZZE9K',
        'QmVQ1XvYGF19X4eJqz1s7FJYJqAxFC4oqh3vWJJEXn66cp'
]}
Parameters

peer (Optional[str]) – Peer to show wantlist for

Returns

dict

Keys

List of blocks the connected daemon is looking for

property chunk_size
Return type

int

class block

Interacting with raw IPFS blocks

get(cid, **kwargs)

Returns the raw contents of a block

>>> client.block.get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
b'\x121\n"\x12 \xdaW>\x14\xe5\xc1\xf6\xe4\x92\xd1 … \n\x02\x08\x01'
Parameters

cid (str) – The CID of an existing block to get

Returns

bytes – Contents of the requested block

put(file, **kwargs)

Stores the contents of the given file object as an IPFS block

>>> client.block.put(io.BytesIO(b'Mary had a little lamb'))
{'Key':  'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
 'Size': 22}
Parameters

file (Union[str, PathLike, bytes, IO[bytes], int]) – The data to be stored as an IPFS block

Returns

dict – Information about the new block

See stat()

stat(cid, **kwargs)

Returns a dict with the size of the block with the given hash.

>>> client.block.stat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Key':  'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'Size': 258}
Parameters

cid (str) – The CID of an existing block to stat

Returns

dict – Information about the requested block

property chunk_size
Return type

int

class bootstrap
add(peer, *peers, **kwargs)

Adds peers to the bootstrap list

Parameters

peer (Union[str, Multiaddr]) – IPFS Multiaddr of a peer to add to the list

Returns

dict
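
A minimal sketch, reusing a well-known bootstrap peer multiaddr that also appears in the swarm examples later in this document (the result is assigned rather than shown, as it depends on the daemon's state):

>>> added = client.bootstrap.add(
...     '/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ')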

list(**kwargs)

Returns the addresses of peers used during initial discovery of the IPFS network

Peers are output in the format <multiaddr>/<peerID>.

>>> client.bootstrap.list()
{'Peers': [
        '/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYER … uvuJ',
        '/ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRa … ca9z',
        '/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKD … KrGM',

        '/ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3p … QBU3'
]}
Returns

dict

Peers

List of known bootstrap peers

rm(peer, *peers, **kwargs)

Removes peers from the bootstrap list

Parameters

peer (Union[str, Multiaddr]) – IPFS Multiaddr of a peer to remove from the list

Returns

dict

property chunk_size
Return type

int

class config
get(**kwargs)

Returns the currently used node configuration

>>> config = client.config.get()
>>> config['Addresses']
{'API': '/ip4/127.0.0.1/tcp/5001',
 'Gateway': '/ip4/127.0.0.1/tcp/8080',
 'Swarm': ['/ip4/0.0.0.0/tcp/4001', '/ip6/::/tcp/4001']},
>>> config['Discovery']
{'MDNS': {'Enabled': True, 'Interval': 10}}
Returns

dict – The entire IPFS daemon configuration

replace(config, **kwargs)

Replaces the existing configuration with a new configuration tree

Make sure to back up the config file first if necessary, as this operation cannot be undone.
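
Since replace() has no inline example above, the following round-trip sketch illustrates the intent; it assumes the mapping returned by get() can be passed back to replace(), and the modified key is purely illustrative:

with ipfshttpclient.connect() as client:
        config = client.config.get()                  # fetch the full configuration tree
        config['Datastore']['StorageMax'] = '20GB'    # illustrative change only
        client.config.replace(config)                 # write the whole tree back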

set(key, value=None, **kwargs)

Adds or replaces a single configuration value

>>> client.config.set("Addresses.Gateway")
{'Key': 'Addresses.Gateway', 'Value': '/ip4/127.0.0.1/tcp/8080'}
>>> client.config.set("Addresses.Gateway", "/ip4/127.0.0.1/tcp/8081")
{'Key': 'Addresses.Gateway', 'Value': '/ip4/127.0.0.1/tcp/8081'}
Parameters
  • key (str) – Name of the configuration option to set

  • value – New value to assign to the given key; if omitted, the daemon reports the current value instead (as in the first example above)

Returns

dict

Key

The requested configuration key

Value

The new value of this configuration key

property chunk_size
Return type

int

class dag
export(cid, **kwargs)

Exports a DAG into a .car file format

>>> data = client.dag.export('bafyreidepjmjhvhlvp5eyxqpmyyi7rxwvl7wsglwai3cnvq63komq4tdya')

Note: When exporting larger DAG structures, remember that you can set the stream parameter to True on any method to have it return results incrementally.

Parameters

cid (str) – Key of the object to export, in CID format

Returns

bytes – DAG in a .car format

get(cid, **kwargs)

Retrieves the contents of a DAG node

>>> client.dag.get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Data': '',
 'Links': [
        {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
         'Name': 'Makefile',          'Size': 174},
        {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
         'Name': 'example',           'Size': 1474},
        {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
         'Name': 'home',              'Size': 3947},
        {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
         'Name': 'lib',               'Size': 268261},
        {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
         'Name': 'published-version', 'Size': 55}
]}
Parameters

cid (str) – Key of the object to retrieve, in CID format

Returns

dict – Cid with the address of the dag object

imprt(data, **kwargs)

Imports a .car file with a DAG into IPFS

>>> with open('data.car', 'rb') as file:
...     client.dag.imprt(file)
{'Root': {
                'Cid': {
                        '/': 'bafyreidepjmjhvhlvp5eyxqpmyyi7rxwvl7wsglwai3cnvq63komq4tdya'
                }
        }
}

Note: This method is named .imprt (rather than .import) to avoid causing a Python SyntaxError due to import being a reserved keyword in Python.

Parameters

data (Union[str, PathLike, bytes, IO[bytes], int]) – IO stream object with data that should be imported

Returns

dict – Dictionary with the root CID of the DAG imported

put(data, **kwargs)

Decodes the given input file as a DAG object and returns its key

>>> client.dag.put(io.BytesIO(b'''
...       {
...           "Data": "another",
...           "Links": [ {
...               "Name": "some link",
...               "Hash": "QmXg9Pp2ytZ14xgmQjYEiHjVjMFXzCV … R39V",
...               "Size": 8
...           } ]
...       }'''))
{'Cid': {
                '/': 'bafyreifgjgbmtykld2e3yncey3naek5xad3h4m2pxmo3of376qxh54qk34'
        }
}
Parameters

data (Union[str, PathLike, bytes, IO[bytes], int]) – IO stream object of path to a file containing the data to put

Returns

dict – Cid with the address of the dag object

resolve(cid, **kwargs)

Resolves a DAG node from its CID, returning its address and remaining path

>>> client.dag.resolve('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Cid': {
                '/': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D'
        }
}
Parameters

cid (str) – Key of the object to resolve, in CID format

Returns

dict – Cid with the address of the dag object

property chunk_size
Return type

int

class dht
findpeer(peer_id, *peer_ids, **kwargs)

Queries the DHT for all of the associated multiaddresses

>>> client.dht.findpeer("QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZN … MTLZ")
[{'ID': 'QmfVGMFrwW6AV6fTWmD6eocaTybffqAvkVLXQEFrYdk6yc',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmTKiUdjbRjeN9yPhNhG1X38YNuBdjeiV9JXYWzCAJ4mj5',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmTGkgHSsULk8p3AKTAqKixxidZQXFyF7mCURcutPqrwjQ',
  'Extra': '', 'Type': 6, 'Responses': None},

 {'ID': '', 'Extra': '', 'Type': 2,
  'Responses': [
        {'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
         'Addrs': [
                '/ip4/10.9.8.1/tcp/4001',
                '/ip6/::1/tcp/4001',
                '/ip4/164.132.197.107/tcp/4001',
                '/ip4/127.0.0.1/tcp/4001']}
  ]}]
Parameters

peer_id (str) – The ID of the peer to search for

Returns

dict – List of multiaddrs

findprovs(cid, *cids, **kwargs)

Finds peers in the DHT that can provide a specific value

>>> client.dht.findprovs("QmNPXDC6wTXVmZ9Uoc8X1oqxRRJr4f1sDuyQu … mpW2")
[{'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmaK6Aj5WXkfnWGoWq7V8pGUYzcHPZp4jKQ5JtmRvSzQGk',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmdUdLu8dNvr4MVW1iWXxKoQrbG6y1vAVWPdkeGK4xppds',
  'Extra': '', 'Type': 6, 'Responses': None},

 {'ID': '', 'Extra': '', 'Type': 4, 'Responses': [
        {'ID': 'QmVgNoP89mzpgEAAqK8owYoDEyB97Mk … E9Uc', 'Addrs': None}
  ]},
 {'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
  'Extra': '', 'Type': 1, 'Responses': [
        {'ID': 'QmSHXfsmN3ZduwFDjeqBn1C8b1tcLkxK6yd … waXw', 'Addrs': [
                '/ip4/127.0.0.1/tcp/4001',
                '/ip4/172.17.0.8/tcp/4001',
                '/ip6/::1/tcp/4001',
                '/ip4/52.32.109.74/tcp/1028'
          ]}
  ]}]
Parameters

cid (str) – The DHT key to find providers for

Returns

dict – List of provider Peer IDs

get(key, *keys, **kwargs)

Queries the DHT for its best value related to given key

There may be several different values for a given key stored in the DHT; in this context best means the record that is most desirable. There is no one metric for best: it depends entirely on the key type. For IPNS, best is the record that is both valid and has the highest sequence number (freshest). Different key types may specify other rules for what they consider to be the best.

Parameters

key (str) – One or more keys whose values should be looked up

Returns

str
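
A hypothetical lookup sketch; the /ipns/ key reuses a peer ID that also appears in the name.publish() example later in this document, and the returned value is assigned rather than shown since it depends on the DHT's state:

>>> record = client.dht.get('/ipns/QmVgNoP89mzpgEAAqK8owYoDEyB97MkcGvoWZir8otE9Uc')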

put(key, value, **kwargs)

Writes a key/value pair to the DHT

Given a key of the form /foo/bar and a value of any form, this will write that value to the DHT with that key.

Keys have two parts: a keytype (foo) and the key name (bar). IPNS uses the /ipns/ keytype, and expects the key name to be a Peer ID. IPNS entries are formatted with a special structure.

You may only use keytypes that are supported in your ipfs binary: go-ipfs currently only supports the /ipns/ keytype. Unless you have a relatively deep understanding of the key’s internal structure, you likely want to use name.publish() instead.

Value is arbitrary text.

>>> client.dht.put("QmVgNoP89mzpgEAAqK8owYoDEyB97Mkc … E9Uc", "test123")
[{'ID': 'QmfLy2aqbhU1RqZnGQyqHSovV8tDufLUaPfN1LNtg5CvDZ',
  'Extra': '', 'Type': 5, 'Responses': None},
 {'ID': 'QmZ5qTkNvvZ5eFq9T4dcCEK7kX8L7iysYEpvQmij9vokGE',
  'Extra': '', 'Type': 5, 'Responses': None},
 {'ID': 'QmYqa6QHCbe6eKiiW6YoThU5yBy8c3eQzpiuW22SgVWSB8',
  'Extra': '', 'Type': 6, 'Responses': None},

 {'ID': 'QmP6TAKVDCziLmx9NV8QGekwtf7ZMuJnmbeHMjcfoZbRMd',
  'Extra': '', 'Type': 1, 'Responses': []}]
Parameters
  • key (str) – A unique identifier

  • value (str) – Arbitrary text to associate with the input (2048 bytes or less)

Returns

list

query(peer_id, *peer_ids, **kwargs)

Finds the closest Peer IDs to a given Peer ID by querying the DHT.

>>> client.dht.query("/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDM … uvuJ")
[{'ID': 'QmPkFbxAQ7DeKD5VGSh9HQrdS574pyNzDmxJeGrRJxoucF',
  'Extra': '', 'Type': 2, 'Responses': None},
 {'ID': 'QmR1MhHVLJSLt9ZthsNNhudb1ny1WdhY4FPW21ZYFWec4f',
  'Extra': '', 'Type': 2, 'Responses': None},
 {'ID': 'Qmcwx1K5aVme45ab6NYWb52K2TFBeABgCLccC7ntUeDsAs',
  'Extra': '', 'Type': 2, 'Responses': None},

 {'ID': 'QmYYy8L3YD1nsF4xtt4xmsc14yqvAAnKksjo3F3iZs5jPv',
  'Extra': '', 'Type': 1, 'Responses': []}]
Parameters

peer_id (str) – The peerID to run the query against

Returns

dict – List of peer IDs

property chunk_size
Return type

int

class files

Manage files in IPFS’s virtual “Mutable File System” (MFS) file storage space

cp(source, dest, **kwargs)

Creates a copy of a file within the MFS

Due to the nature of IPFS this will not actually involve any copying of the file’s content. Instead, a new link will be added to the directory containing dest referencing the CID of source – this is very similar to how hard links to read-only files work in classical filesystems.

>>> client.files.ls("/")
{'Entries': [
        {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0},
        {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0}
]}
>>> client.files.cp("/test", "/bla")
>>> client.files.ls("/")
{'Entries': [
        {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0},
        {'Size': 0, 'Hash': '', 'Name': 'bla', 'Type': 0},
        {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0}
]}
Parameters
  • source (str) – Filepath within the MFS to copy from

  • dest (str) – Destination filepath within the MFS to which the file will be copied/linked

ls(path, **kwargs)

Lists contents of a directory in the MFS

>>> client.files.ls("/")
{'Entries': [
        {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0}
]}
Parameters

path (str) – Filepath within the MFS

Returns

dict

Entries

List of files in the given MFS directory

mkdir(path, parents=False, **kwargs)

Creates a directory within the MFS

>>> client.files.mkdir("/test")
Parameters
  • path (str) – Filepath within the MFS

  • parents (bool) – Create parent directories as needed and do not raise an exception if the requested directory already exists

mv(source, dest, **kwargs)

Moves files and directories within the MFS

>>> client.files.mv("/test/file", "/bla/file")
Parameters
  • source (str) – Existing filepath within the MFS

  • dest (str) – Destination to which the file will be moved in the MFS

read(path, offset=0, count=None, **kwargs)

Reads a file stored in the MFS

>>> client.files.read("/bla/file")
b'hi'
Parameters
  • path (str) – Filepath within the MFS

  • offset (int) – Byte offset at which to begin reading

  • count (Optional[int]) – Maximum number of bytes to read (default is the entire remaining length)

Returns

bytes (MFS file contents)

rm(path, recursive=False, **kwargs)

Removes a file from the MFS

Note that the file’s contents will not actually be removed from the IPFS node until the next repository GC run. If it is important to have the file’s contents erased from the node, this may be done manually by calling repo.gc() at a time of convenience.

>>> client.files.rm("/bla/file")
Parameters
  • path (str) – Filepath within the MFS

  • recursive (bool) – Recursively remove directories?

stat(path, **kwargs)

Returns basic stat information for an MFS file (including its hash)

>>> client.files.stat("/test")
{'Hash': 'QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn',
 'Size': 0, 'CumulativeSize': 4, 'Type': 'directory', 'Blocks': 0}
Parameters

path (str) – Filepath within the MFS

Returns

dict (MFS file information)

write(path, file, offset=0, create=False, truncate=False, count=None, **kwargs)

Writes a file into the MFS

>>> client.files.write("/test/file", io.BytesIO(b"hi"), create=True)
Parameters
  • path (str) – Filepath within the MFS

  • file (Union[str, PathLike, bytes, IO[bytes], int]) – IO stream object with data that should be written

  • offset (int) – Byte offset at which to begin writing

  • create (bool) – Create the file if it does not exist

  • truncate (bool) – Truncate the file to size zero before writing

  • count (Optional[int]) – Maximum number of bytes to read from the source file

property chunk_size
Return type

int

class key
gen(key_name, type, size=2048, **kwargs)

Adds a new public key that can be used for publish()

>>> client.key.gen('example_key_name')
{'Name': 'example_key_name',
 'Id': 'QmQLaT5ZrCfSkXTH6rUKtVidcxj8jrW3X2h75Lug1AV7g8'}
Parameters
  • key_name (str) – Name of the new Key to be generated. Used to reference the Keys.

  • type (str) –

    Type of key to generate. The current possible keys types are:

    • "rsa"

    • "ed25519"

  • size (int) – Bitsize of key to generate

Returns

dict

Name

The name of the newly generated key

Id

The key ID/fingerprint of the newly generated key

list(**kwargs)

Returns a list of all available IPNS keys

>>> client.key.list()
{'Keys': [
        {'Name': 'self',
         'Id': 'QmQf22bZar3WKmojipms22PkXH1MZGmvsqzQtuSvQE3uhm'},
        {'Name': 'example_key_name',
         'Id': 'QmQLaT5ZrCfSkXTH6rUKtVidcxj8jrW3X2h75Lug1AV7g8'}
]}
Returns

dict

Keys

List of dictionaries with Names and Ids of public keys

rename(key_name, new_key_name, **kwargs)

Rename an existing key

>>> client.key.rename("bla", "personal")
{"Was": "bla",
 "Now": "personal",
 "Id": "QmeyrRNxXaasZaoDXcCZgryoBCga9shaHQ4suHAYXbNZF3",
 "Overwrite": False}
Parameters
  • key_name (str) – Current name of the key to rename

  • new_key_name (str) – New name of the key

Returns

dict – Information about the key rename

rm(key_name, *key_names, **kwargs)

Removes one or more keys

>>> client.key.rm("bla")
{"Keys": [
        {"Name": "bla",
         "Id": "QmfJpR6paB6h891y7SYXGe6gapyNgepBeAYMbyejWA4FWA"}
]}
Parameters

key_name (str) – Name of the key(s) to remove.

Returns

dict

Keys

List of key names and IDs that have been removed

property chunk_size
Return type

int

class name
publish(ipfs_path, resolve=True, lifetime='24h', ttl=None, key=None, allow_offline=False, **kwargs)

Publishes an object to IPNS

IPNS is a PKI namespace, where names are the hashes of public keys, and the private key enables publishing new (signed) values. In publish, the default value of name is your own identity public key.

>>> client.name.publish('/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZK … GZ5d')
{'Value': '/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d',
 'Name': 'QmVgNoP89mzpgEAAqK8owYoDEyB97MkcGvoWZir8otE9Uc'}
Parameters
  • ipfs_path (str) – IPFS path of the object to be published

  • allow_offline (bool) – When offline, save the IPNS record to the local datastore without broadcasting to the network instead of simply failing.

  • lifetime (Union[str, int]) –

    Time duration that the record will be valid for

    Accepts durations such as "300s", "1.5h" or "2h45m". Valid units are:

    • "ns"

    • "us" (or "µs")

    • "ms"

    • "s"

    • "m"

    • "h"

  • resolve (bool) – Resolve given path before publishing

  • ttl (Union[str, int, None]) – Time duration this record should be cached for. Uses the same syntax as the lifetime option. (experimental feature)

  • key (Optional[str]) – Name of the key to be used, as listed by ‘ipfs key list’.

Returns

dict

Name

Key ID of the key to which the given value was published

Value

Value that was published

resolve(name=None, recursive=False, nocache=False, dht_record_count=None, dht_timeout=None, **kwargs)

Retrieves the value currently published at the given IPNS name

IPNS is a PKI namespace, where names are the hashes of public keys, and the private key enables publishing new (signed) values. In resolve, the default value of name is your own identity public key.

>>> client.name.resolve()
{'Path': '/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d'}
Parameters
  • name (Optional[str]) – The IPNS name to resolve (defaults to the connected node)

  • recursive (bool) – Resolve until the result is not an IPFS name (default: false)

  • nocache (bool) – Do not use cached entries (default: false)

  • dht_record_count (Optional[int]) – Number of records to request for DHT resolution.

  • dht_timeout (Union[str, int, None]) –

    Maximum time to collect values during DHT resolution, e.g. “30s”.

    For the exact syntax see the lifetime argument on publish(). Set this parameter to 0 to disable the timeout.

Returns

dict

Path

The resolved value of the given name

property chunk_size
Return type

int

class object
class patch

add_link(root, name, ref, create=False, **kwargs)

Creates a new merkledag object based on an existing one

The new object will have an additional link to the given CID.

>>> client.object.patch.add_link(
...     'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2',
...     'Johnny',
...     'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'
... )
{'Hash': 'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k'}
Parameters
  • root (str) – IPFS hash for the object being modified

  • name (str) – name for the new link

  • ref (str) – IPFS hash for the object being linked to

  • create (bool) – Create intermediary nodes

Returns

dict

Hash

Hash of the newly derived object

append_data(cid, new_data, **kwargs)

Creates a new merkledag object based on an existing one

The new object will have the same links as the previous object, but with the provided data appended to it.

>>> client.object.patch.append_data("QmZZmY … fTqm", io.BytesIO(b"bla"))
{'Hash': 'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'}
Parameters
  • cid (str) – CID of the object to modify

  • new_data – The data to append to the object’s data section

Returns

dict

Hash

Hash of the newly derived object

rm_link(root, link, **kwargs)

Creates a new merkledag object based on an existing one

The new object will lack a link to the specified object, but otherwise be unchanged.

>>> client.object.patch.rm_link(
...     'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k',
...     'Johnny'
... )
{'Hash': 'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'}
Parameters
  • root (str) – IPFS hash of the object to modify

  • link (str) – name of the link to remove

Returns

dict

Hash

Hash of the newly derived object

set_data(root, data, **kwargs)

Creates a new merkledag object based on an existing one

The new object will have the same links as the old object but with the provided data instead of the old object’s data contents.

>>> client.object.patch.set_data(
...     'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k',
...     io.BytesIO(b'bla')
... )
{'Hash': 'QmSw3k2qkv4ZPsbu9DVEJaTMszAQWNgM1FTFYpfZeNQWrd'}
Parameters
  • root (str) – IPFS hash of the object to modify

  • data – The new data to store in place of the old object’s data contents

Returns

dict

Hash

Hash of the newly derived object

property chunk_size
Return type

int

data(cid, **kwargs)

Returns the raw bytes in an IPFS object

>>> client.object.data('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
b'\x08\x01'
Parameters

cid (str) – Key of the object to retrieve, in CID format

Returns

bytes – Raw object data

diff(a, b, **kwargs)

Diff two cids.

>>> client.object.diff(
...     'QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n',
...     'QmV4QR7MCBj5VTi6ddHmXPyjWGzbaKEtX2mx7axA5PA13G'
... )
{'Changes': [{
        'Type': 2,
        'Path': '',
        'Before':
                {'/': 'QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n'},
        'After':
                {'/': 'QmV4QR7MCBj5VTi6ddHmXPyjWGzbaKEtX2mx7axA5PA13G'}}]}
Parameters
  • a (str) – Key of object a for comparison

  • b (str) – Key of object b for comparison

Returns

dict

get(cid, **kwargs)

Get and serialize the DAG node named by CID.

>>> client.object.get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Data': '',
 'Links': [
        {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
         'Name': 'Makefile',          'Size': 174},
        {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
         'Name': 'example',           'Size': 1474},
        {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
         'Name': 'home',              'Size': 3947},
        {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
         'Name': 'lib',               'Size': 268261},
        {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
         'Name': 'published-version', 'Size': 55}
]}
Parameters

cid (str) – Key of the object to retrieve, in CID format

Returns

dict

Data

Raw object data (ISO-8859-1 decoded)

Links

List of links associated with the given object

links(cid, **kwargs)

Returns the links pointed to by the specified object

>>> client.object.links('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDx … ca7D')
{'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'Links': [
        {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
         'Name': 'Makefile',          'Size': 174},
        {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
         'Name': 'example',           'Size': 1474},
        {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
         'Name': 'home',              'Size': 3947},
        {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
         'Name': 'lib',               'Size': 268261},
        {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
         'Name': 'published-version', 'Size': 55}]}
Parameters

cid (str) – Key of the object to retrieve, in CID format

Returns

dict

Hash

The requested object CID

Links

List of links associated with the given object

new(template=None, **kwargs)

Creates a new object from an IPFS template

By default this creates and returns a new empty merkledag node, but you may pass an optional template argument to create a preformatted node.

>>> client.object.new()
{'Hash': 'QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n'}
Parameters

template (Optional[str]) –

Blueprints from which to construct the new object. Possible values:

  • "unixfs-dir"

  • None

Returns

dict

Hash

The hash of the requested empty object

put(file, **kwargs)

Stores input as a DAG object and returns its key.

>>> client.object.put(io.BytesIO(b'''
...       {
...           "Data": "another",
...           "Links": [ {
...               "Name": "some link",
...               "Hash": "QmXg9Pp2ytZ14xgmQjYEiHjVjMFXzCV … R39V",
...               "Size": 8
...           } ]
...       }'''))
{'Hash': 'QmZZmY4KCu9r3e7M2Pcn46Fc5qbn6NpzaAGaYb22kbfTqm',
 'Links': [
        {'Hash': 'QmXg9Pp2ytZ14xgmQjYEiHjVjMFXzCVVEcRTWJBmLgR39V',
         'Size': 8, 'Name': 'some link'}
 ]
}
Parameters

file (Union[str, PathLike, bytes, IO[bytes], int]) – (JSON) object from which the DAG object will be created

Returns

dict – Hash and links of the created DAG object

See the links() method for details.

stat(cid, **kwargs)

Get stats for the DAG node named by cid.

>>> client.object.stat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'LinksSize': 256, 'NumLinks': 5,
 'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'BlockSize': 258, 'CumulativeSize': 274169, 'DataSize': 2}
Parameters

cid (str) – Key of the object to retrieve, in CID format

Returns

dict

property chunk_size
Return type

int

class pin
add(path, *paths, recursive=True, **kwargs)

Pins objects to the node’s local repository

Stores one or more IPFS objects from a given path in the local repository.

>>> client.pin.add("QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d")
{'Pins': ['QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d']}
Parameters
  • path (str) – Path to object(s) to be pinned

  • recursive (bool) – Recursively pin the objects linked to by the specified object(s)

Returns

dict

Pins

List of IPFS objects that have been pinned by this action

ls(*paths, type='all', **kwargs)

Lists objects pinned in the local repository

By default, all pinned objects are returned, but the type flag or arguments can restrict that to a specific pin type or to some specific objects respectively. In particular, the type="recursive" argument will only list objects added with .pin.add(…) (or similar) and will greatly speed up processing, as obtaining this list does not require a complete repository metadata scan.

>>> client.pin.ls()
{'Keys': {
        'QmNNPMA1eGUbKxeph6yqV8ZmRkdVat … YMuz': {'Type': 'recursive'},
        'QmNPZUCeSN5458Uwny8mXSWubjjr6J … kP5e': {'Type': 'recursive'},
        'QmNg5zWpRMxzRAVg7FTQ3tUxVbKj8E … gHPz': {'Type': 'indirect'},

        'QmNiuVapnYCrLjxyweHeuk6Xdqfvts … wCCe': {'Type': 'indirect'}
}}

>>> # While the above works you should always try to use `type="recursive"`
>>> # instead as it will greatly speed up processing and only lists
>>> # explicit pins (added with `.pin.add(…)` or similar), rather than
>>> # all objects that won't be removed as part of `.repo.gc()`:
>>> client.pin.ls(type="recursive")
{'Keys': {
        'QmNNPMA1eGUbKxeph6yqV8ZmRkdVat … YMuz': {'Type': 'recursive'},
        'QmNPZUCeSN5458Uwny8mXSWubjjr6J … kP5e': {'Type': 'recursive'},

}}

>>> client.pin.ls('/ipfs/QmNNPMA1eGUbKxeph6yqV8ZmRkdVat … YMuz')
{'Keys': {
        'QmNNPMA1eGUbKxeph6yqV8ZmRkdVat … YMuz': {'Type': 'recursive'}}}

>>> client.pin.ls('/ipfs/QmdBCSn4UJP82MjhRVwpABww48tXL3 … mA6z')
ipfshttpclient.exceptions.ErrorResponse:
        path '/ipfs/QmdBCSn4UJP82MjhRVwpABww48tXL3 … mA6z' is not pinned
Parameters
  • paths (str) –

    The IPFS paths or CIDs to search for

    If none are passed, return information about all pinned objects. If any of the passed CIDs is not pinned, then remote will return an error and an ErrorResponse exception will be raised.

  • type (str) –

    The type of pinned keys to list. Can be:

    • "direct"

    • "indirect"

    • "recursive"

    • "all"

Raises

ErrorResponse – Remote returned an error. Remote will return an error if any of the passed CIDs is not pinned. In this case, the exception will contain ‘not pinned’ in its args[0].

Returns

dict

Keys

Mapping of IPFS object names currently pinned to their types

rm(path, *paths, recursive=True, **kwargs)

Removes a pinned object from local storage

Removes the pin from the given object, allowing it to be garbage collected if needed. That is, depending on the node configuration, it may not be garbage collected anytime soon, or at all, unless you manually clean up the local repository using repo.gc().

Also note that if an object is pinned both directly (that is, its type is "recursive") and indirectly (meaning that it is referenced by another object that is still pinned), it may not be removed at all after this.

>>> client.pin.rm('QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d')
{'Pins': ['QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d']}
Parameters
  • path (str) – Path to object(s) to be unpinned

  • recursive (bool) – Recursively unpin the object linked to by the specified object(s)

Returns

dict

Pins

List of IPFS objects that have been unpinned by this action

update(from_path, to_path, *, unpin=True, **kwargs)

Replaces one pin with another

Updates one pin to another, making sure that all objects in the new pin are local. Then removes the old pin. This is an optimized version of first using add() to add a new pin for an object and then using rm() to remove the pin for the old object.

>>> client.pin.update("QmXMqez83NU77ifmcPs5CkNRTMQksBLkyfBf4H5g1NZ52P",
...              "QmUykHAi1aSjMzHw3KmBoJjqRUQYNkFXm8K1y7ZsJxpfPH")
{"Pins": ["/ipfs/QmXMqez83NU77ifmcPs5CkNRTMQksBLkyfBf4H5g1NZ52P",
                  "/ipfs/QmUykHAi1aSjMzHw3KmBoJjqRUQYNkFXm8K1y7ZsJxpfPH"]}
Parameters
  • from_path (str) – Path to the old object

  • to_path (str) – Path to the new object to be pinned

  • unpin (bool) – Should the pin of the old object be removed?

Returns

dict

Pins

List of IPFS objects that have been affected by this action

verify(path, *paths, verbose=False, **kwargs)

Verifies that all recursive pins are completely available in the local repository

Scan the repo for pinned object graphs and check their integrity. Issues will be reported back with a helpful human-readable error message to aid in error recovery. This is useful to help recover from datastore corruptions (such as when accidentally deleting files added using the filestore backend).

This function returns an iterator that has to be exhausted or closed using either a context manager (with-statement) or its .close() method.

>>> with client.pin.verify("QmN…TTZ", verbose=True) as pin_verify_iter:
...     for item in pin_verify_iter:
...         print(item)
...
{"Cid":"QmVkNdzCBukBRdpyFiKPyL2R15qPExMr9rV9RFV2kf9eeV","Ok":True}
{"Cid":"QmbPzQruAEFjUU3gQfupns6b8USr8VrD9H71GrqGDXQSxm","Ok":True}
{"Cid":"Qmcns1nUvbeWiecdGDPw8JxWeUfxCV8JKhTfgzs3F8JM4P","Ok":True}

Parameters
  • path (str) – Path to object(s) to be checked

  • verbose (bool) – Also report status of items that were OK?

Returns

Iterable[dict]

Cid

IPFS object ID checked

Ok

Whether the given object was successfully verified

property chunk_size
Return type

int

class pubsub
ls(**kwargs)

Lists subscribed topics by name

This method returns data that contains a list of all topics the user is subscribed to. In order to subscribe to a topic, pubsub.subscribe() must be called.

# subscribe to a channel
>>> with client.pubsub.sub("hello") as sub:
...     client.pubsub.ls()
{
        'Strings' : ["hello"]
}
Returns

dict

Strings

List of topics the IPFS daemon is subscribed to

peers(topic=None, **kwargs)

Lists the peers we are pubsubbing with

Lists the IDs of other IPFS users who we are connected to via some topic. Without specifying a topic, IPFS peers from all subscribed topics will be returned in the data. If a topic is specified, only the IPFS IDs of the peers from the specified topic will be returned in the data.

>>> client.pubsub.peers()
{'Strings':
                [
                        'QmPbZ3SDgmTNEB1gNSE9DEf4xT8eag3AFn5uo7X39TbZM8',
                        'QmQKiXYzoFpiGZ93DaFBFDMDWDJCRjXDARu4wne2PRtSgA',
                        ...
                        'QmepgFW7BHEtU4pZJdxaNiv75mKLLRQnPi1KaaXmQN4V1a'
                ]
}

## with a topic

# subscribe to a channel
>>> with client.pubsub.sub('hello') as sub:
...     client.pubsub.peers(topic='hello')
{'String':
                [
                        'QmPbZ3SDgmTNEB1gNSE9DEf4xT8eag3AFn5uo7X39TbZM8',
                        ...
                        # other peers connected to the same channel
                ]
}
Parameters

topic (Optional[str]) – The topic to list connected peers of (defaults to None which lists peers for all topics)

Returns

dict

Strings

List of PeerIDs of peers we are pubsubbing with

publish(topic, payload, **kwargs)

Publish a message to a given pubsub topic

Publishing will publish the given payload (string) to everyone currently subscribed to the given topic.

All data (including the ID of the publisher) is automatically base64 encoded when published.

# publishes the message 'message' to the topic 'hello'
>>> client.pubsub.publish('hello', 'message')
[]
Parameters
  • topic (str) – Topic to publish to

  • payload (str) – Data to be published to the given topic

Returns

list – An empty list

subscribe(topic, discover=False, **kwargs)

Subscribes to messages on a given topic

Subscribing to a topic in IPFS means anytime a message is published to a topic, the subscribers will be notified of the publication.

The connection with the pubsub topic is opened and read. The Subscription returned should be used inside a context manager to ensure that it is closed properly and not left hanging.

>>> sub = client.pubsub.subscribe('testing')
>>> with client.pubsub.subscribe('testing') as sub:
...     # publish a message 'hello' to the topic 'testing'
...     client.pubsub.publish('testing', 'hello')
...     for message in sub:
...             print(message)
...             # Stop reading the subscription after
...             # we receive one publication
...             break
{'from': '<base64encoded IPFS id>',
 'data': 'aGVsbG8=',
 'topicIDs': ['testing']}

# NOTE: in order to receive published data
# you must already be subscribed to the topic at publication
# time.
Parameters
  • topic (str) – Name of a topic to subscribe to

  • discover (bool) – Try to discover other peers subscribed to the same topic (defaults to False)

Returns

SubChannel – Generator wrapped in a context manager that maintains a connection stream to the given topic.

property chunk_size
Return type

int

class repo
gc(*, quiet=False, return_result=True, **kwargs)

Removes stored objects that are not pinned from the repo

>>> client.repo.gc()
[{'Key': 'QmNPXDC6wTXVmZ9Uoc8X1oqxRRJr4f1sDuyQuwaHG2mpW2'},
 {'Key': 'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k'},
 {'Key': 'QmRVBnxUCsD57ic5FksKYadtyUbMsyo9KYQKKELajqAp4q'},

 {'Key': 'QmYp4TeCurXrhsxnzt5wqLqqUz8ZRg5zsc7GuUrUSDtwzP'}]

Performs a garbage collection sweep of the local set of stored objects and removes ones that are not pinned in order to reclaim hard disk space. Returns the hashes of all collected objects.

Parameters
  • quiet (bool) –

    Should the client avoid downloading the list of removed objects?

    Passing True to this parameter often causes the GC process to speed up tremendously, as it will also avoid generating the list of removed objects in the connected daemon at all.

  • return_result (bool) –

    If False this is a legacy alias for quiet=True.

    (Will be dropped in py-ipfs-api-client 0.7.x!)

Returns

dict – List of IPFS objects that have been removed

stat(**kwargs)

Returns local repository status information

>>> client.repo.stat()
{'NumObjects': 354,
 'RepoPath': '…/.local/share/ipfs',
 'Version': 'fs-repo@4',
 'RepoSize': 13789310}
Returns

dict – General information about the IPFS file repository

NumObjects

Number of objects in the local repo.

RepoPath

The path to the repo being currently used.

RepoSize

Size in bytes that the repo is currently using.

Version

The repo version.

property chunk_size
Return type

int

class swarm
class filters
add(address, *addresses, **kwargs)

Adds a given multiaddr filter to the filter/ignore list

This will add an address filter to the daemon’s swarm. Filters applied this way will not persist across daemon reboots; to achieve that, add your filters to the configuration file.

>>> client.swarm.filters.add("/ip4/192.168.0.0/ipcidr/16")
{'Strings': ['/ip4/192.168.0.0/ipcidr/16']}
Parameters

address (Union[str, Multiaddr]) – Multiaddr to avoid connecting to

Returns

dict

Strings

List of swarm filters added

rm(address, *addresses, **kwargs)

Removes a given multiaddr filter from the filter list

This will remove an address filter from the daemon’s swarm. Filters removed this way will not persist across daemon reboots; to achieve that, remove your filters from the configuration file.

>>> client.swarm.filters.rm("/ip4/192.168.0.0/ipcidr/16")
{'Strings': ['/ip4/192.168.0.0/ipcidr/16']}
Parameters

address (Union[str, Multiaddr]) – Multiaddr filter to remove

Returns

dict

Strings

List of swarm filters removed

property chunk_size
Return type

int

addrs(**kwargs)

Returns the addresses of currently connected peers by peer id

>>> pprint(client.swarm.addrs())
{'Addrs': {
        'QmNMVHJTSZHTWMWBbmBrQgkA1hZPWYuVJx2DpSGESWW6Kn': [
                '/ip4/10.1.0.1/tcp/4001',
                '/ip4/127.0.0.1/tcp/4001',
                '/ip4/51.254.25.16/tcp/4001',
                '/ip6/2001:41d0:b:587:3cae:6eff:fe40:94d8/tcp/4001',
                '/ip6/2001:470:7812:1045::1/tcp/4001',
                '/ip6/::1/tcp/4001',
                '/ip6/fc02:2735:e595:bb70:8ffc:5293:8af8:c4b7/tcp/4001',
                '/ip6/fd00:7374:6172:100::1/tcp/4001',
                '/ip6/fd20:f8be:a41:0:c495:aff:fe7e:44ee/tcp/4001',
                '/ip6/fd20:f8be:a41::953/tcp/4001'],
        'QmNQsK1Tnhe2Uh2t9s49MJjrz7wgPHj4VyrZzjRe8dj7KQ': [
                '/ip4/10.16.0.5/tcp/4001',
                '/ip4/127.0.0.1/tcp/4001',
                '/ip4/172.17.0.1/tcp/4001',
                '/ip4/178.62.107.36/tcp/4001',
                '/ip6/::1/tcp/4001'],

}}
Returns

dict – Multiaddrs of peers by peer id

Addrs

Mapping of PeerIDs to a list of their advertised multiaddrs

connect(address, *addresses, **kwargs)

Attempts to connect to a peer at the given multiaddr

This will open a new direct connection to a peer address. The address format is an IPFS multiaddr, e.g.:

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
>>> client.swarm.connect("/ip4/104.131.131.82/tcp/4001/ipfs/Qma … uvuJ")
{'Strings': ['connect QmaCpDMGvV2BGHeYERUEnRQAwe3 … uvuJ success']}
Parameters

address (Union[str, Multiaddr]) – Address of peer to connect to

Returns

dict – Textual connection status report

disconnect(address, *addresses, **kwargs)

Closes any open connection to a given multiaddr

This will close a connection to a peer address. The address format is an IPFS multiaddr:

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ

The disconnect is not permanent; if IPFS needs to talk to that address later, it will reconnect. To avoid this, add a filter for the given address before disconnecting.

>>> client.swarm.disconnect("/ip4/104.131.131.82/tcp/4001/ipfs/Qm … uJ")
{'Strings': ['disconnect QmaCpDMGvV2BGHeYERUEnRQA … uvuJ success']}
Parameters

address (Union[str, Multiaddr]) – Address of peer to disconnect from

Returns

dict – Textual connection status report

peers(**kwargs)

Returns the addresses & IDs of currently connected peers

>>> client.swarm.peers()
{'Strings': [
        '/ip4/101.201.40.124/tcp/40001/ipfs/QmZDYAhmMDtnoC6XZ … kPZc',
        '/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYER … uvuJ',
        '/ip4/104.223.59.174/tcp/4001/ipfs/QmeWdgoZezpdHz1PX8 … 1jB6',

        '/ip6/fce3: … :f140/tcp/43901/ipfs/QmSoLnSGccFuZQJzRa … ca9z'
]}
Returns

dict

Strings

List of Multiaddrs that the daemon is connected to

property chunk_size
Return type

int

class unstable

Features that are subject to change and are only provided for convenience

class log
level(subsystem, level, **kwargs)

Changes the logging output level for a given subsystem

This API is subject to future change or removal!

>>> client.unstable.log.level("path", "info")
{"Message": "Changed log level of 'path' to 'info'\n"}
Parameters
  • subsystem (str) – The subsystem logging identifier (Use "all" for all subsystems)

  • level (str) –

    The desired logging level. Must be one of:

    • "debug"

    • "info"

    • "warning"

    • "error"

    • "fatal"

    • "panic"

Returns

dict

Status

Textual status report

ls(**kwargs)

Lists the available logging subsystems

This API is subject to future change or removal!

>>> client.unstable.log.ls()
{'Strings': [
        'github.com/ipfs/go-libp2p/p2p/host', 'net/identify',
        'merkledag', 'providers', 'routing/record', 'chunk', 'mfs',
        'ipns-repub', 'flatfs', 'ping', 'mockrouter', 'dagio',
        'cmds/files', 'blockset', 'engine', 'mocknet', 'config',
        'commands/http', 'cmd/ipfs', 'command', 'conn', 'gc',
        'peerstore', 'core', 'coreunix', 'fsrepo', 'core/server',
        'boguskey', 'github.com/ipfs/go-libp2p/p2p/host/routed',
        'diagnostics', 'namesys', 'fuse/ipfs', 'node', 'secio',
        'core/commands', 'supernode', 'mdns', 'path', 'table',
        'swarm2', 'peerqueue', 'mount', 'fuse/ipns', 'blockstore',
        'github.com/ipfs/go-libp2p/p2p/host/basic', 'lock', 'nat',
        'importer', 'corerepo', 'dht.pb', 'pin', 'bitswap_network',
        'github.com/ipfs/go-libp2p/p2p/protocol/relay', 'peer',
        'transport', 'dht', 'offlinerouting', 'tarfmt', 'eventlog',
        'ipfsaddr', 'github.com/ipfs/go-libp2p/p2p/net/swarm/addr',
        'bitswap', 'reprovider', 'supernode/proxy', 'crypto', 'tour',
        'commands/cli', 'blockservice']}
Returns

dict

Strings

List of daemon logging subsystems

tail(**kwargs)

Streams log outputs as they are generated

This API is subject to future change or removal!

This function returns an iterator that needs to be closed using a context manager (with-statement) or using the .close() method.

>>> with client.unstable.log.tail() as log_tail_iter:
...     for item in log_tail_iter:
...         print(item)
...
{"event":"updatePeer","system":"dht",
 "peerID":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.43353297Z"}
{"event":"handleAddProviderBegin","system":"dht",
 "peer":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.433642581Z"}
{"event":"handleAddProvider","system":"dht","duration":91704,
 "key":"QmNT9Tejg6t57Vs8XM2TVJXCwevWiGsZh3kB4HQXUZRK1o",
 "peer":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.433747513Z"}
{"event":"updatePeer","system":"dht",
 "peerID":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.435843012Z"}

Returns

Iterable[dict]

property chunk_size
Return type

int

class refs
local(**kwargs)

Returns the hashes of all local objects

This API is subject to future change or removal!

>>> client.unstable.refs.local()
[{'Ref': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7 … cNMV', 'Err': ''},

 {'Ref': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTi … eXJY', 'Err': ''}]
Returns

list

property chunk_size
Return type

int

property chunk_size
Return type

int

add(file, *files, recursive=False, pattern=None, trickle=False, follow_symlinks=False, period_special=True, only_hash=False, wrap_with_directory=False, chunker=None, pin=True, raw_leaves=None, nocopy=False, cid_version=None, **kwargs)

Adds a file, several files or directory of files to IPFS

Arguments marked as “directories only” will be ignored unless file refers to a directory path or file descriptor. Passing a directory file descriptor is currently restricted to Unix (due to Python standard library limitations on Windows) and will prevent the nocopy feature from working.

>>> with io.open('nurseryrhyme.txt', 'w', encoding='utf-8') as f:
...     numbytes = f.write('Mary had a little lamb')
>>> client.add('nurseryrhyme.txt')
{'Hash': 'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab',
 'Name': 'nurseryrhyme.txt'}

Directory uploads

By default only regular files and directories immediately below the given directory path/FD are uploaded to the connected IPFS node; to upload an entire directory tree instead, recursive can be set to True. Symbolic links and special files (pipes, sockets, device nodes, …) cannot be represented by the UnixFS data structure this call creates and hence are ignored while scanning the target directory; to include the targets of symbolic links in the upload, set follow_symlinks to True.

The set of files and directories included in the upload may be restricted by passing any combination of glob matching strings, compiled regular expression objects and custom Matcher objects. A file or directory will be included if it matches any of the patterns provided. For regular expressions, note that it is impossible to reliably predict which directories are relevant to a given pattern; as a result, if recursive is set to True the entire directory hierarchy will always be scanned and compared against the given expression, even if only very few files actually match it. To avoid this, pass a custom matching class or use glob patterns instead (which will only cause a scan of the directories required to match their value).

Note that, unlike the ipfs add CLI interface, this implementation will by default include dot-files (“files that are hidden”) – any file or directory whose name starts with a period/dot character – in the upload. For behaviour that is similar to the CLI command, set pattern to "**" – this enables the default glob behaviour of not matching dot-files unless period_special is set to False or the pattern actually starts with a period.
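
To make the directory-upload options above concrete, a small sketch (the directory name and glob pattern are illustrative):

with ipfshttpclient.connect() as client:
        # Recursively add every *.py file below ./my-project; one dict per added
        # file or directory is returned when more than a single file is given.
        entries = client.add('my-project', recursive=True, pattern='**/*.py')
        for entry in entries:
                print(entry['Name'], entry['Hash'])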

Parameters
  • file (Union[str, PathLike, bytes, IO[bytes], int]) – A filepath, path-object, file descriptor or open file object for the file or directory to add

  • recursive (bool) – Upload files in subdirectories, if file refers to a directory?

  • pattern (Union[Iterable[Union[AnyStr, Pattern, Matcher[AnyStr]]], AnyStr, Pattern, Matcher[AnyStr], None]) – A glob pattern, compiled regular expression object or arbitrary matcher used to limit the files and directories included as part of adding a directory (directories only)

  • trickle (bool) – Use trickle-dag format (optimized for streaming) when generating the dag; see the old FAQ for more information

  • follow_symlinks (bool) – Follow symbolic links when recursively scanning directories? (directories only)

  • period_special (bool) –

    Treat files and directories with a leading period character (“dot-files”) specially in glob patterns? (directories only)

    If this is set these files will only be matched by path labels whose initial character is a period, but not by those starting with ?, * or [.

  • only_hash (bool) – Only chunk and hash, but do not write to disk

  • wrap_with_directory (bool) – Wrap files with a directory object to preserve their filename

  • chunker (Optional[str]) – The chunking algorithm to use

  • pin (bool) – Pin this object when adding

  • raw_leaves (Optional[bool]) – Use raw blocks for leaf nodes. (experimental). (Default: True when nocopy is True, or False otherwise)

  • nocopy (bool) – Add the file using filestore. Implies raw-leaves. (experimental).

  • cid_version (Optional[int]) – CID version. Default value is provided by IPFS daemon. (experimental)

Returns

Union[dict, list] – File name and hash of the added file node; a list of one or more such items is returned unless only a single file (and not a directory) was given

add_bytes(data, **kwargs)[source]

Adds a set of bytes as a file to IPFS.

>>> client.add_bytes(b"Mary had a little lamb")
'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab'

Also accepts and will stream generator objects.

Parameters

data (bytes) – Content to be added as a file

Returns

str – Hash of the added IPFS object
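
Since generator objects are streamed, a file can also be assembled from chunks; assuming the same content and default settings as the example above, the resulting hash should be identical (a sketch):

>>> def chunks():
...     yield b"Mary had "
...     yield b"a little lamb"
>>> client.add_bytes(chunks())
'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab'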

add_json(json_obj, **kwargs)[source]

Adds a json-serializable Python dict as a json file to IPFS.

>>> client.add_json({'one': 1, 'two': 2, 'three': 3})
'QmVz9g7m5u3oHiNKHj2CJX1dbG1gtismRS3g9NaPBBLbob'
Parameters

json_obj (dict) – A json-serializable Python dictionary

Returns

str – Hash of the added IPFS object

add_str(string, **kwargs)[source]

Adds a Python string as a file to IPFS.

>>> client.add_str(u"Mary had a little lamb")
'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab'

Also accepts and will stream generator objects.

Parameters

string (str) – Content to be added as a file

Returns

str – Hash of the added IPFS object

apply_workarounds()[source]

Query version information of the referenced daemon and enable any workarounds known for the corresponding version

Returns

dict – The version information returned by the daemon
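
A minimal usage sketch – the returned mapping mirrors the version() output shown further below (actual values depend on the connected daemon):

>>> client.apply_workarounds()
{'Version': '0.4.3-rc2', 'Repo': '4', 'Commit': '',
 'System': 'amd64/linux', 'Golang': 'go1.6.2'}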

cat(cid, offset=0, length=None, **kwargs)

Retrieves the contents of a file identified by hash

>>> client.cat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
Traceback (most recent call last):
  ...
ipfshttpclient.exceptions.Error: this dag node is a directory
>>> client.cat('QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX')
b'<!DOCTYPE html>\n<html>\n\n<head>\n<title>ipfs example viewer</…'
Parameters
  • cid (str) – The name or path of the IPFS object(s) to be retrieved

  • offset (int) – Byte offset to begin reading from

  • length (Optional[int]) – Maximum number of bytes to read (defaults to reading the entire file)

Returns

bytes – The file’s contents
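
The offset and length parameters allow reading only a slice of a file; continuing the example above (a sketch, assuming the same content):

>>> client.cat('QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX', offset=16, length=6)
b'<html>'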

close()[source]

Close any currently open client session and free any associated resources.

If there was no session currently open this method does nothing. An open session is not a requirement for using a Client object and as such all methods defined on it will continue to work, but a new TCP connection will be established for each and every API call invoked. Such usage should therefore be avoided and may cause a warning in the future. See the class’s description for details.
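
A minimal sketch of explicit session management (assumes a reachable local daemon; the printed version string is illustrative). Opening a session with connect(session=True) avoids the per-call TCP connection overhead described above:

>>> import ipfshttpclient
>>> session_client = ipfshttpclient.connect(session=True)
>>> session_client.version()['Version']
'0.4.3-rc2'
>>> session_client.close()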

dns(domain_name, recursive=False, **kwargs)

Resolves DNS links to their referenced dweb-path

CIDs are hard to remember, but domain names are usually easy to remember. To create memorable aliases for CIDs, DNS TXT records can point to other DNS links, IPFS objects, IPNS keys, etc. This command resolves those links to the referenced object.

For example, with this DNS TXT record:

>>> import dns.resolver
>>> a = dns.resolver.query("ipfs.io", "TXT")
>>> a.response.answer[0].items[0].to_text()
'"dnslink=/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n"'

The resolver will give:

>>> client.dns("ipfs.io")
{'Path': '/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n'}
Parameters
  • domain_name (str) – The domain name to resolve

  • recursive (bool) – Resolve until the name is not a DNS link

Returns

dict

Path

Resource the DNS entry points to

get(cid, target='.', **kwargs)

Downloads a file, or directory of files from IPFS

Parameters
  • cid (str) – The path to the IPFS object(s) to be downloaded

  • target (Union[str, PathLike, bytes]) –

    The directory to place the downloaded files in

    Defaults to the current working directory.

Return type

None
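
A minimal usage sketch (assumes /tmp is writable); the CID is the directory used in the ls() example below, and nothing is returned on success:

>>> client.get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D', target='/tmp')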

get_json(cid, **kwargs)[source]

Loads a json object from IPFS.

>>> client.get_json('QmVz9g7m5u3oHiNKHj2CJX1dbG1gtismRS3g9NaPBBLbob')
{'one': 1, 'two': 2, 'three': 3}
Parameters

cid (Union[str, cid.CIDv0, cid.CIDv1]) – CID of the IPFS object to load

Returns

object – Deserialized IPFS JSON object value

id(peer=None, **kwargs)

Returns general information about an IPFS node

Returns the PublicKey, ProtocolVersion, ID, AgentVersion and Addresses of the connected daemon or some other node.

>>> client.id()
{'ID': 'QmVgNoP89mzpgEAAqK8owYoDEyB97MkcGvoWZir8otE9Uc',
'PublicKey': 'CAASpgIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggE … BAAE=',
'AgentVersion': 'go-libp2p/3.3.4',
'ProtocolVersion': 'ipfs/0.1.0',
'Addresses': [
        '/ip4/127.0.0.1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owYo … E9Uc',
        '/ip4/10.1.0.172/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owY … E9Uc',
        '/ip4/172.18.0.1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owY … E9Uc',
        '/ip6/::1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owYoDEyB97 … E9Uc',
        '/ip6/fccc:7904:b05b:a579:957b:deef:f066:cad9/tcp/400 … E9Uc',
        '/ip6/fd56:1966:efd8::212/tcp/4001/ipfs/QmVgNoP89mzpg … E9Uc',
        '/ip6/fd56:1966:efd8:0:def1:34d0:773:48f/tcp/4001/ipf … E9Uc',
        '/ip6/2001:db8:1::1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8 … E9Uc',
        '/ip4/77.116.233.54/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8 … E9Uc',
        '/ip4/77.116.233.54/tcp/10842/ipfs/QmVgNoP89mzpgEAAqK … E9Uc']}
Parameters

peer (Optional[str]) – Peer.ID of the node to look up (local node if None)

Returns

dict – Information about the IPFS node

ls(cid, **kwargs)

Returns a list of objects linked to by the given hash

>>> client.ls('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Objects': [
        {'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
                'Links': [
                        {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
                         'Name': 'Makefile',          'Size': 174, 'Type': 2},

                        {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
                         'Name': 'published-version', 'Size': 55,  'Type': 2}
                ]
        }
]}
Parameters

cid (str) – The path to the IPFS object(s) to list links from

Returns

dict – Directory information and contents

ping(peer, *peers, count=10, **kwargs)

Provides round-trip latency information for the routing system.

Finds nodes via the routing system, sends pings, waits for pongs, and prints out round-trip latency information.

>>> client.ping("QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n")
[{'Success': True,  'Time': 0,
  'Text': 'Looking up peer QmTzQ1JRkWErjk39mryYw2WVaphAZN … c15n'},
 {'Success': False, 'Time': 0,
  'Text': 'Peer lookup error: routing: not found'}]

Hint

Pass stream=True to receive ping progress reports as they arrive.

Parameters
  • peer (str) – ID of peer(s) to be pinged

  • count (int) – Number of ping messages to send

Returns

list – Progress reports from the ping
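
A sketch of consuming streamed progress reports as they arrive (the peer ID is the one used above; output elided):

>>> for report in client.ping('QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n',
...                           count=2, stream=True):
...     print(report)
{'Success': True, 'Time': 0,
 'Text': 'Looking up peer QmTzQ1JRkWErjk39mryYw2WVaphAZN … c15n'}
…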

resolve(path, recursive=False, **kwargs)

Resolves a dweb-path and returns the path of the referenced item

There are a number of mutable name protocols that can link among themselves and into IPNS. For example IPNS references can (currently) point at an IPFS object, and DNS links can point at other DNS links, IPNS entries, or IPFS objects. This command accepts any of these identifiers.

>>> client.resolve("/ipfs/QmTkzDwWqPbnAh5YiV5VwcTLnGdw … ca7D/Makefile")
{'Path': '/ipfs/Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV'}
>>> client.resolve("/ipns/ipfs.io")
{'Path': '/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n'}
Parameters
  • path (str) – The name to resolve

  • recursive (bool) – Resolve until the result is an IPFS name

Returns

dict

Path

IPFS path of the requested resource

stop()

Stops the connected IPFS daemon instance

Sending any further requests after this will fail with ConnectionError, unless you start another IPFS daemon instance at the same address.
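
A sketch of what a request after shutdown might look like (the exact error text will vary):

>>> client.stop()
>>> client.version()
Traceback (most recent call last):
  ...
ipfshttpclient.exceptions.ConnectionError: …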

version(**kwargs)

Returns the software versions of the currently connected node

>>> client.version()
{'Version': '0.4.3-rc2', 'Repo': '4', 'Commit': '',
 'System': 'amd64/linux', 'Golang': 'go1.6.2'}
Returns

dict – Daemon and system version information