Perplexity

The Perplexity component is an AI component that allows users to connect to the AI models served on the Perplexity platform. It can carry out the following tasks:

  • Chat

Release Stage

Alpha

Configuration

The component definition and tasks are defined in the definition.yaml and tasks.yaml files respectively.

Setup

To communicate with Perplexity, provide the following connection details. You may specify them directly in a pipeline recipe as key-value pairs within the component's setup block, or you can create a Connection from the Integration Settings page and reference the whole setup as setup: ${connection.<my-connection-id>}.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| API Key | `api-key` | string | Fill in your API key from the vendor's platform. |
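
As a minimal sketch, the setup block inside a recipe might look like the following (the secret name `perplexity-api-key` is illustrative; any secret you have configured can be referenced):

```yaml
component:
  perplexity-0:
    type: perplexity
    task: TASK_CHAT
    setup:
      # Reference a stored secret rather than hard-coding the key.
      api-key: ${secret.perplexity-api-key}
```

Using a secret reference keeps the API key out of the recipe itself.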

Supported Tasks

Chat

Generate a response based on conversation input.

| Input | Field ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_CHAT` |
| Chat Data (required) | `data` | object | Input data. |
| Input Parameter | `parameter` | object | Input parameter. |
Input Objects in Chat

Chat Data

Input data.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Chat Messages | `messages` | array | List of chat messages. |
| Model Name | `model` | string | The model to be used for `TASK_CHAT`. Enum values: `sonar`, `sonar-pro`, `llama-3.1-sonar-small-128k-online`, `llama-3.1-sonar-large-128k-online`, `llama-3.1-sonar-huge-128k-online`. |

Chat Messages

List of chat messages.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Content | `content` | array | The message content. |
| Name | `name` | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. |
| Role | `role` | string | The message role: `system`, `user`, or `assistant`. |

Content

The message content.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Text Message | `text` | string | Text message. |
| Text | `type` | string | Text content type. |

Input Parameter

Input parameter.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Enable Search Classifier | `enable-search-classifier` | boolean | Whether to enable the search classifier. |
| Frequency Penalty | `frequency-penalty` | number | A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with `presence-penalty`. |
| Last Updated After Filter | `last-updated-after-filter` | string | Filters search results to only include content last updated after this date. Format should be %m/%d/%Y (e.g. 3/1/2025). |
| Last Updated Before Filter | `last-updated-before-filter` | string | Filters search results to only include content last updated before this date. Format should be %m/%d/%Y (e.g. 3/1/2025). |
| Max New Tokens | `max-tokens` | integer | The maximum number of completion tokens returned by the API. The number of tokens requested in `max-tokens` plus the number of prompt tokens sent in messages must not exceed the context window token limit of the requested model. If left unspecified, the model will generate tokens until it reaches either a stop token or the end of its context window. |
| Presence Penalty | `presence-penalty` | number | A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with `frequency-penalty`. |
| Return Related Questions | `return-related-questions` | boolean | Determines whether related questions should be returned. |
| Search After Date Filter | `search-after-date-filter` | string | Filters search results to only include content published after this date. Format should be %m/%d/%Y (e.g. 3/1/2025). |
| Search Before Date Filter | `search-before-date-filter` | string | Filters search results to only include content published before this date. Format should be %m/%d/%Y (e.g. 3/1/2025). |
| Search Domain Filter | `search-domain-filter` | string | Given a list of domains, limit the citations used by the online model to URLs from the specified domains. Currently limited to 3 domains for whitelisting and blacklisting. For blacklisting, add a `-` to the beginning of the domain string. |
| Search Mode | `search-mode` | string | Controls the search mode used for the request. When set to `academic`, results prioritize scholarly sources such as peer-reviewed papers and academic journals. When set to `sec`, results prioritize financial and legal sources. Enum values: `academic`, `web`, `sec`. |
| Search Recency Filter | `search-recency-filter` | string | Returns search results within the specified time interval; does not apply to images. Values include `month`, `week`, `day`, `year`. |
| Stream | `stream` | boolean | If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available. |
| Temperature | `temperature` | number | The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random; lower values are more deterministic. |
| Top K | `top-k` | number | The number of tokens to keep for top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. We recommend altering either `top-k` or `top-p`, but not both. |
| Top P | `top-p` | number | The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with `top-p` probability mass. We recommend altering either `top-k` or `top-p`, but not both. |
| Web Search Options | `web-search-options` | object | Configuration for using web search in model responses. |
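
As an illustration of how the search-related parameters above can be combined in a recipe (domain names and dates are placeholder values, not recommendations):

```yaml
parameter:
  search-mode: academic
  search-domain-filter:
    - arxiv.org        # whitelist this domain
    - -pinterest.com   # leading '-' blacklists a domain
  search-after-date-filter: 3/1/2025
  search-recency-filter: month
```

Note that `search-domain-filter` accepts at most 3 domains in total across whitelist and blacklist entries.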

Web Search Options

Configuration for using web search in model responses.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Search Context Size | `search-context-size` | string | Determines how much search context is retrieved for the model. Options are `low` (minimizes context for cost savings but less comprehensive answers), `medium` (balanced approach suitable for most queries), and `high` (maximizes context for comprehensive answers but at higher cost). |
| User Location | `user-location` | object | To refine search results based on geography, you can specify an approximate user location. |

User Location

To refine search results based on geography, you can specify an approximate user location.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Country | `country` | string | The two-letter ISO country code of the user's location. |
| Latitude | `latitude` | number | The latitude of the user's location. |
| Longitude | `longitude` | number | The longitude of the user's location. |
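
Putting the two objects above together, a `web-search-options` configuration inside the `parameter` block might look like this (coordinates are illustrative; they correspond to San Francisco):

```yaml
parameter:
  web-search-options:
    search-context-size: medium
    user-location:
      country: US
      latitude: 37.7749
      longitude: -122.4194
```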

| Output | Field ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Output Data | `data` | object | Output data. |
| Output Metadata (optional) | `metadata` | object | Output metadata. |
Output Objects in Chat

Output Data

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Choices | `choices` | array | List of chat completion choices. |
| Citations | `citations` | array | List of citations. |
| Search Results | `search-results` | array | A list of search results related to the response. |

Choices

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Created | `created` | integer | The timestamp of when the chat completion was created. Format is in ISO 8601. Example: 2024-07-01T11:47:40.388Z. |
| Finish Reason | `finish-reason` | string | The reason the model stopped generating tokens. |
| Index | `index` | integer | The index of the choice in the list of choices. |
| Message | `message` | object | A chat message generated by the model. |

Message

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Content | `content` | string | The contents of the message. |
| Role | `role` | string | The role of the author of this message. |

Search Results

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Date | `date` | string | The date of the search result. |
| Title | `title` | string | The title of the search result. |
| URL | `url` | string | The URL of the search result. |

Output Metadata

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Usage | `usage` | object | Usage statistics for the request. |

Usage

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Completion Tokens | `completion-tokens` | integer | Number of tokens in the generated response. |
| Prompt Tokens | `prompt-tokens` | integer | Number of tokens in the prompt. |
| Total Tokens | `total-tokens` | integer | Total number of tokens used in the request (prompt + completion). |
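
Downstream components and pipeline outputs can reference these fields through the component's output. As a sketch (the exact reference paths, including the `[0]` index into `choices`, should be checked against the reference syntax supported by your pipeline runtime):

```yaml
output:
  answer:
    title: Answer
    # First completion choice's message text
    value: ${perplexity-0.output.data.choices[0].message.content}
  tokens-used:
    title: Tokens Used
    # Total token usage reported in the metadata
    value: ${perplexity-0.output.metadata.usage.total-tokens}
```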

Example Recipes

```yaml
version: v1beta

variable:
  prompt:
    type: string
    title: Prompt

component:
  perplexity-0:
    type: perplexity
    task: TASK_CHAT
    input:
      data:
        model: sonar
        messages:
          - content:
            - text: Be precise and concise.
              type: text
            role: system
          - content:
            - text: ${variable.prompt}
              type: text
            role: user
            name: Miles
      parameter:
        max-tokens: 500
        temperature: 0.2
        top-p: 0.9
        stream: false
        search-domain-filter:
          - perplexity.ai
        search-recency-filter: month
        top-k: 0
        presence-penalty: 0
        frequency-penalty: 1

output:
  perplexity:
    title: Perplexity
    value: ${perplexity-0.output}
```