The GradientLLMParams interface defines the input parameters for the GradientLLM class.

Optional adapterId: Gradient Adapter ID for custom fine-tuned models.
Optional cache
Optional callback: Use callbacks instead.
Optional callbacks
Optional concurrency: Use maxConcurrency instead.
Optional gradientAccessKey: Gradient AI Access Token. Provide the access token if you do not wish to pull it from the environment automatically.
Optional inferenceParameters: Parameters accepted by the Gradient npm package.
Optional maxConcurrency: The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
Optional maxRetries: The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
Optional metadata
Optional modelSlug: Gradient AI Model Slug.
Optional onFailedAttempt: Custom handler for failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
Optional tags
Optional verbose
Optional workspaceId: Gradient Workspace Id. Provide the workspace id if you do not wish to pull it from the environment automatically.
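Below is a minimal sketch of how these parameters might be passed when constructing a GradientLLM instance. The import path ("@langchain/community/llms/gradient_ai"), the environment variable names (GRADIENT_ACCESS_TOKEN, GRADIENT_WORKSPACE_ID), and the specific model slug and inference parameter keys are assumptions that may differ between package versions; the interface fields themselves are the ones documented above.

```typescript
// Sketch only: import path and environment variable names are assumptions.
import { GradientLLM } from "@langchain/community/llms/gradient_ai";

const llm = new GradientLLM({
  // Credentials; omit these to let the integration pull them from the
  // environment (assumed: GRADIENT_ACCESS_TOKEN, GRADIENT_WORKSPACE_ID).
  gradientAccessKey: process.env.GRADIENT_ACCESS_TOKEN,
  workspaceId: process.env.GRADIENT_WORKSPACE_ID,

  // Model selection: a base model slug, or adapterId for a custom fine-tuned model.
  modelSlug: "llama2-7b-chat", // example slug, not guaranteed to exist

  // Forwarded as-is to the Gradient npm package; keys are illustrative.
  inferenceParameters: {
    maxGeneratedTokenCount: 256,
    temperature: 0.7,
  },

  // Retry/concurrency behavior inherited from the base LLM params.
  maxConcurrency: 2,
  maxRetries: 6,
});

async function main() {
  const res = await llm.invoke("What is the capital of France?");
  console.log(res);
}

main().catch(console.error);
```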