Python API¶
- exception proteus.BadResponseError¶
- exception proteus.ConnectionError¶
- class proteus.Datatype(value)¶
An enumeration.
- exception proteus.InvalidArgument¶
- exception proteus.ProteusError¶
- proteus.RestClient¶
alias of proteus.rest.Client
- class proteus.Server(executable: Optional[str] = None, http_port: int = 8998)¶
The Server class provides methods to control the Proteus server.
- start(quiet=False)¶
Start the proteus server
- Parameters
quiet (bool, optional) – Suppress all output if True. Defaults to False.
- stop(kill=False)¶
Stop the proteus server
- Parameters
kill (bool, optional) – Forcibly terminate the server with signal 9 (SIGKILL) if True. Defaults to False.
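As a quick orientation, here is a minimal sketch of the server lifecycle using only the start() and stop() methods above; it assumes the proteus-server executable can be found without passing an explicit path:

    import proteus

    # Launch a local proteus-server on the default HTTP port (8998).
    server = proteus.Server(http_port=8998)
    server.start(quiet=True)   # quiet=True suppresses the server's output

    # ... interact with the server, e.g. through proteus.RestClient ...

    server.stop()              # stop(kill=True) would terminate it with signal 9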
Clients¶
REST¶
This module defines how to communicate with a proteus-server using the REST API.
- Classes:
Client - provides methods to communicate with a proteus-server over REST
- class proteus.rest.Client(address, headers=None)¶
The Client class provides methods to communicate with the proteus-server.
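A hedged construction example: the (address, headers=None) signature comes from the documentation above, but the host:port address format and the Authorization header shown are assumptions for illustration only.

    from proteus.rest import Client

    # Assumption: the server address is passed as a host:port string.
    client = Client("127.0.0.1:8998")

    # Optional headers are attached to the client's HTTP requests.
    authed_client = Client(
        "127.0.0.1:8998",
        headers={"Authorization": "Bearer <token>"},  # hypothetical header
    )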
- get_address(command, *args)¶
Get the HTTP address corresponding to a particular command. If the command takes arguments, pass them as well.
- Parameters
command (str) – Name of the command to get the address of
- Returns
HTTP address
- Return type
str
- get_endpoint(command, *args)¶
Get the REST endpoint corresponding to a particular command. If the command takes arguments, pass them as well.
- Parameters
command (str) – Name of the command to get the endpoint of
- Returns
REST endpoint
- Return type
str
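To illustrate how get_address() and get_endpoint() differ, a short sketch; the command name and the example outputs in the comments are assumptions, not documented values.

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # get_endpoint returns just the REST path for a command, while get_address
    # prefixes it with the server address the client was constructed with.
    endpoint = client.get_endpoint("model_ready", "resnet50")  # hypothetical command and path
    address = client.get_address("model_ready", "resnet50")    # e.g. "127.0.0.1:8998/" + endpoint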
- infer(model, request, compress=False)¶
Make an inference request to a model
- Parameters
model (str) – Qualified name of the model to make a request to
request (Request|dict) – Request to make
compress (bool, optional) – Compress the request. Defaults to False.
- Returns
JsonResponse on success, ErrorResponse on failure
- Return type
Response
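A hedged single-inference example; the dict layout of the request and the qualified model name are assumptions made for illustration, since the real schema depends on the loaded model.

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # Hypothetical request body; the actual schema is model-specific.
    request = {
        "inputs": [
            {"name": "input0", "datatype": "FP32", "shape": [3], "data": [0.1, 0.2, 0.3]}
        ]
    }

    # "resnet50-0" stands in for the qualified name returned by load().
    response = client.infer("resnet50-0", request, compress=False)
    print(response)  # JsonResponse on success, ErrorResponse on failure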
- infers(models: list, requests)¶
Make multiple inference requests to multiple models. This method launches multiple asynchronous requests simultaneously and blocks until all return. The number of models and requests should match.
- Parameters
models (list) – List of qualified names of the models to make inference requests to
requests (list) – List of Requests or dicts, one per model
- Returns
List containing responses for each request
- Return type
list
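A sketch of fanning the same (hypothetical) request out to two loaded copies of a model; the qualified names and request body are placeholders.

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # Hypothetical request reused for both models; real schemas are model-specific.
    request = {"inputs": [{"name": "input0", "datatype": "FP32", "shape": [1], "data": [1.0]}]}

    # One qualified model name per request; the two lists must have equal length.
    models = ["resnet50-0", "resnet50-1"]
    requests = [request, request]

    responses = client.infers(models, requests)  # blocks until every request returns
    for model, response in zip(models, responses):
        print(model, response)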
- load(model: str, parameters=None)¶
Load a model
- Parameters
model (str) – Name of the model to load
parameters (dict, optional) – Load-time parameters to pass to Proteus. Defaults to None.
- Returns
An HtmlResponse containing the qualified name to use for subsequent inference requests
- Return type
Response
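A hedged load example; the worker name and the parameter key are placeholders, and how the qualified name is read out of the HtmlResponse is not documented above.

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # Load a model named "resnet50"; the parameter dict shown is hypothetical.
    response = client.load("resnet50", parameters={"batch_size": 4})

    # The response carries the qualified name (e.g. "resnet50-0") that
    # subsequent infer(), model_ready(), and unload() calls should target.
    print(response)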
- model_ready(model)¶
Check if a particular model is ready
- Parameters
model (str) – Qualified name of the model to check
- Returns
True if ready
- Return type
bool
- server_live()¶
Check if the server is live
- Returns
True if live
- Return type
bool
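A polling sketch combining server_live() and model_ready(); the qualified model name is a placeholder.

    import time

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # Poll until the server answers liveness checks, then until a specific
    # model is ready; "resnet50-0" is a placeholder qualified name.
    while not client.server_live():
        time.sleep(0.5)
    while not client.model_ready("resnet50-0"):
        time.sleep(0.5)

The wait_until_live() method documented below is the built-in equivalent of the first loop.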
- unload(model)¶
Unload a model
- Parameters
model (str) – Qualified name of the model to unload
- Returns
HtmlResponse on success, ErrorResponse on failure
- Return type
Response
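A hedged unload example; the qualified name is a placeholder for whatever load() returned.

    from proteus.rest import Client

    client = Client("127.0.0.1:8998")

    # Unload by qualified name; "resnet50-0" is a placeholder.
    response = client.unload("resnet50-0")
    print(response)  # HtmlResponse on success, ErrorResponse on failure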
- wait_until_live()¶
Block until the server is live
- wait_until_stop()¶
Block until the server has stopped
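Tying the pieces together, a hedged end-to-end sketch of waiting on server state; only the documented signatures are used, while the host:port address format is an assumption.

    import proteus

    # Start a local server and wait for it to come up before using it.
    server = proteus.Server(http_port=8998)
    server.start(quiet=True)

    client = proteus.RestClient("127.0.0.1:8998")
    client.wait_until_live()   # block until the server answers liveness checks

    # ... load models and run inference requests here ...

    server.stop()
    client.wait_until_stop()   # block until the server has shut down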