What is the Prompt Optimizer for?
Don't dump one long prompt into the LLM and burn tokens or time unnecessarily. Instead, use the Prompt Optimizer to identify deterministic/quantitative questions that can be preprocessed without the LLM API, thereby reducing your API costs and improving the accuracy of responses. It is best suited for prompts that you run repeatedly against LLM APIs and that involve simple data analysis, calculations, or structured data manipulation.
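For a concrete sense of what "preprocessed without the LLM API" means, here is a minimal Python sketch of the idea (not the tool's implementation; the sample data and column names are hypothetical): the deterministic question is answered locally, and only the qualitative question is left for the LLM.

    import csv, io

    # Hypothetical sample standing in for one of your data files.
    sample = io.StringIO("region,sales\nNorth,120.5\nSouth,98.0\nWest,210.25\n")

    # Quantitative question, answered deterministically with no LLM call:
    total = sum(float(row["sales"]) for row in csv.DictReader(sample))

    # Only the qualitative question still needs the LLM; the precomputed
    # answer is injected, so the prompt sent to the API is short and exact.
    reduced_prompt = (
        f"Total sales in the sample are {total:.2f}. "
        "In two sentences, suggest what might explain the regional spread."
    )
    print(reduced_prompt)  # send this to your LLM API of choice

The arithmetic costs nothing and is exact, while the LLM handles only the part that genuinely needs it.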
How do I use the Prompt Optimizer?
Paste your prompt and provide two sample data files representing the data sources you plan to use with the prompt. Note that the combination of prompt and data sample must represent one task that you would run with an LLM API. The tool first analyzes the prompt to identify the quantitative & qualitative questions within it. Then, using the two data samples, it checks which of the quantitative questions can be answered from the contents of the sample files, and whether the answers have a discernible structure & position in those files: if yes, that quantitative question can be preprocessed; if no, it cannot. The final output lists the quantitative questions that can be preprocessed, those that cannot, and the qualitative questions that require LLM processing. You will need to use your own API key.
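To make the "discernible structure & position" check concrete, here is a minimal Python sketch of the idea (the tool's actual analysis is its own; the file contents and the has_field helper are hypothetical): a quantitative question is preprocessable only if the value it needs appears consistently in both samples.

    import csv, io

    # Two hypothetical data samples, standing in for the files you upload.
    sample_a = io.StringIO("date,revenue\n2024-01-01,100\n")
    sample_b = io.StringIO("date,revenue\n2024-02-01,150\n")

    # The value a quantitative question needs must sit in a recognizable
    # place in BOTH samples; here, under the same 'revenue' column header.
    def has_field(sample, field):
        return field in (csv.DictReader(sample).fieldnames or [])

    preprocessable = all(has_field(s, "revenue") for s in (sample_a, sample_b))
    print("can be preprocessed" if preprocessable else "needs the LLM")

If the answer's location varies between the samples, or it cannot be computed from the files at all, the question stays on the LLM side.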
Note that questions which require reasoning or multi-step execution will likely be classified as qualitative questions. For additional details & a product walkthrough, click the Doge icon on the right, or write to us at support@theindiaportfolio.com.
Tool Terms: Please Read Before Using
This tool runs entirely in your browser. None of the data you provide (API Key, prompt, data files) is ever sent to or stored on our servers. Consequently, all information will be permanently lost when you close this browser tab. This service is provided "as-is" without warranty. By using this tool, you assume all responsibility for your inputs and outputs. Use responsibly. AI can make mistakes.
Be mindful of the model context length available under your API subscription tier.