Function amdinfer::inferAsyncOrdered¶
Defined in File client.cpp
Function Documentation¶
- std::vector<InferenceResponse> amdinfer::inferAsyncOrdered(Client *client, const std::string &model, const std::vector<InferenceRequest> &requests)¶
Makes inference requests in parallel to the specified model. All requests are sent asynchronously, and the responses are gathered and returned in the same order as the requests.
- Parameters
client – a pointer to a client object
model – the model/worker to make inference requests to
requests – a vector of requests
- Returns
a std::vector<InferenceResponse> containing one response per request, in the same order as the requests
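
A minimal usage sketch, assuming an HttpClient connected to a locally running server and a worker named "echo"; the umbrella header, endpoint address, model name, and request setup are illustrative assumptions rather than details taken from this page:

    #include <vector>

    #include "amdinfer/amdinfer.hpp"  // assumed umbrella header for the client API

    int main() {
      // Assumption: an HTTP client pointing at a local server endpoint
      amdinfer::HttpClient client{"http://127.0.0.1:8998"};

      // Build a batch of requests for a hypothetical "echo" worker
      std::vector<amdinfer::InferenceRequest> requests;
      for (int i = 0; i < 4; ++i) {
        amdinfer::InferenceRequest request;
        // ... populate the request's input tensors for the target model ...
        requests.push_back(request);
      }

      // Send all requests in parallel; responses are returned in request order
      auto responses = amdinfer::inferAsyncOrdered(&client, "echo", requests);

      // responses[i] corresponds to requests[i]
      return 0;
    }

Because the first argument is a Client pointer, any concrete client (e.g. an HTTP or gRPC client) derived from Client can be passed by address, as sketched above.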