AMD Inference Server
0.3.0


File base64.cpp

Parent directory: /workspace/amdinfer/src/amdinfer/util

Implements base64 encoding/decoding.

Contents

  • Definition (/workspace/amdinfer/src/amdinfer/util/base64.cpp)
  • Includes
  • Namespaces
  • Functions
  • Defines

Definition (/workspace/amdinfer/src/amdinfer/util/base64.cpp)

  • Program Listing for File base64.cpp

Includes

  • amdinfer/util/base64.hpp (File base64.hpp)
  • b64/decode.h
  • b64/encode.h
  • stdexcept

Namespaces

  • Namespace amdinfer
  • Namespace amdinfer::util

Functions

  • Function amdinfer::util::base64Decode(const char *, size_t)
  • Function amdinfer::util::base64Decode(std::string)
  • Function amdinfer::util::base64Encode(std::string)
  • Function amdinfer::util::base64Encode(const char *, size_t)
  • Function amdinfer::util::minDecodeLength
  • Function amdinfer::util::minEncodeLength
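Taken together, these overloads suggest a simple encode/decode round trip. The sketch below is illustrative only: it assumes the std::string overloads return the transformed string by value, which this index page does not itself confirm.

    // Round-trip sketch using the overloads listed above. The std::string
    // return type is an assumption based on the signatures shown here.
    #include <cassert>
    #include <string>

    #include "amdinfer/util/base64.hpp"

    int main() {
      const std::string original{"hello world"};

      // Encode the raw bytes into their base64 representation
      const std::string encoded = amdinfer::util::base64Encode(original);

      // Decode the base64 string back into the original bytes
      const std::string decoded = amdinfer::util::base64Decode(encoded);

      assert(decoded == original);
      return 0;
    }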

Defines

  • Define BUFFERSIZE
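BUFFERSIZE is presumably the chunk size used when streaming data through the underlying b64 encoder and decoder, and minEncodeLength/minDecodeLength presumably bound the output size for a given input. For reference, standard base64 emits 4 output characters for every 3 input bytes, so conservative bounds look like the following hypothetical re-derivation (not this file's actual implementation, which may also account for details such as line wrapping):

    // Hypothetical buffer-size math, shown for illustration only:
    // base64 maps every 3 input bytes to 4 output characters.
    #include <cstddef>

    std::size_t min_encode_length(std::size_t n) {
      return 4 * ((n + 2) / 3);  // ceil(n / 3) groups, 4 chars each
    }

    std::size_t min_decode_length(std::size_t n) {
      return 3 * ((n + 3) / 4);  // ceil(n / 4) groups, up to 3 bytes each
    }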


© Copyright 2022 Advanced Micro Devices, Inc. Last updated on February 15, 2023.
