---
sidebar_position: 60
description: Debug and resolve common promptfoo issues with solutions for memory optimization, API configuration, Node.js errors, native builds, and network/proxy setup in your LLM testing pipeline
---

# Troubleshooting

## Log Files and Debugging

Before troubleshooting specific issues, you can access detailed logs to help diagnose problems:

- **View logs directly**: Log files are stored in your config directory at `~/.promptfoo/logs` by default
- **Custom log directory**: Set the `PROMPTFOO_LOG_DIR` environment variable to write logs to a different directory (e.g., `PROMPTFOO_LOG_DIR=./logs promptfoo eval`)
- **Export logs for sharing**: Use `promptfoo export logs` to create a compressed archive of your log files for debugging or support

### Live Debug Toggle

During `promptfoo redteam run`, you can toggle debug logging on and off in real time without restarting:

- **Press `v`** at any time to toggle verbose/debug output
- Works only in interactive terminal mode (not in CI or when output is piped)
- Useful for investigating issues mid-run without overwhelming log output

When you start a scan, you'll see:

```
Tip: Press v to toggle debug output
```

Press `v` to enable debug logs and see detailed request metadata, provider response summaries, and grading status. Sensitive payloads, credentials, and raw response bodies may be redacted or summarized. Press `v` again to return to clean output.

:::tip
This is especially helpful when a scan seems stuck or you want to understand what's happening with a specific test case.
:::

## Out of memory error

If you have a large number of tests or your tests have large outputs, you may encounter an out of memory error. There are several ways to handle this:

### Basic setup

Follow **all** of these steps:

1. Do not use the `--no-write` flag. Results must be written to disk to avoid holding everything in memory.
2. Use the `--no-table` flag.
3. **Use JSONL format**: `--output results.jsonl`

:::tip
JSONL format processes results in batches, avoiding memory limits that cause JSON export to fail on large datasets.
:::

### Granular memory optimization

You can selectively strip out heavy data from the results using environment variables:

```bash
# Strip prompt text (useful if your prompts contain large amounts of text or images)
export PROMPTFOO_STRIP_PROMPT_TEXT=true

# Strip model outputs (useful if your model generates large responses)
export PROMPTFOO_STRIP_RESPONSE_OUTPUT=true

# Strip test variables (useful if your test cases contain large datasets)
export PROMPTFOO_STRIP_TEST_VARS=true

# Strip grading results (useful if you're using model-graded assertions)
export PROMPTFOO_STRIP_GRADING_RESULT=true

# Strip metadata (useful if you're storing large amounts of custom metadata)
export PROMPTFOO_STRIP_METADATA=true
```

You can use any combination of these variables to optimize memory usage while preserving the data you need.

### Increase Node.js memory limit

If you're still encountering memory issues after trying the above options, you can increase the amount of memory available to promptfoo by setting the `NODE_OPTIONS` environment variable:

```bash
# 8192 MB is 8 GB. Set this to an appropriate value for your machine.
NODE_OPTIONS="--max-old-space-size=8192" npx promptfoo eval
```
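Putting the pieces together, a large run might look like the following sketch. Every flag and variable here is described above; adjust the values and output path for your setup:

```bash
# Stream results to JSONL, skip the CLI table, strip heavy fields,
# and raise the Node.js heap limit for one large eval run
PROMPTFOO_STRIP_RESPONSE_OUTPUT=true \
PROMPTFOO_STRIP_METADATA=true \
NODE_OPTIONS="--max-old-space-size=8192" \
npx promptfoo eval --no-table --output results.jsonl
```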
## Object template handling

When working with complex data structures in templates, you might encounter issues with how objects are displayed or accessed in your prompts and grading rubrics.

### `[object Object]` appears in outputs

If you see `[object Object]` in your LLM outputs or grading results, this means JavaScript objects are being converted to their string representation without proper serialization. By default, promptfoo automatically stringifies objects to prevent this issue.

**Example problem:**

```yaml
prompts:
  - 'Product: {{product}}'
tests:
  - vars:
      product:
        name: 'Headphones'
        price: 99.99
# Results in: "Product: [object Object]" in outputs
```

**Default solution:** Objects are automatically converted to JSON strings:

```text
Product: {"name":"Headphones","price":99.99}
```

### Accessing object properties in templates

If you need to access specific properties of objects in your templates (like `{{ product.name }}`), you can enable direct object access:

```bash
export PROMPTFOO_DISABLE_OBJECT_STRINGIFY=true
promptfoo eval
```

With this setting enabled, you can use object property access in templates:

```yaml
prompts:
  - 'Product: {{ product.name }} costs ${{ product.price }}'
# Results in: "Product: Headphones costs $99.99"
```

### When to use each approach

**Use default behavior (stringified objects) when:**

- You want objects as JSON strings in your prompts
- Working with existing templates that expect JSON strings
- You need maximum compatibility and don't want to see `[object Object]`

**Use object property access (`PROMPTFOO_DISABLE_OBJECT_STRINGIFY=true`) when:**

- You need to access specific properties like `{{ product.name }}`
- Building new templates designed for object navigation
- Working with complex nested data structures

## Node.js version mismatch error

You might encounter this error after switching Node.js versions:

```text
Error: The module '/path/to/node_modules/better-sqlite3/build/Release/better_sqlite3.node'
was compiled against a different Node.js version using
NODE_MODULE_VERSION 115. This version of Node.js requires
NODE_MODULE_VERSION 127. Please try re-compiling or re-installing
the module (for instance, using `npm rebuild` or `npm install`).
```

This happens because promptfoo uses native modules such as `better-sqlite3`, which must be compiled for the active Node.js version.

### Solution: Rebuild or reinstall for the active Node.js version

Use the repair path that matches how you run promptfoo.

**Project checkout**

If you are running promptfoo from a local checkout, switch to the Node.js version you intend to use, then rebuild the native dependency:

```bash
npm rebuild better-sqlite3
```

If the project includes an `.nvmrc`, run `nvm use` first so the active Node.js version matches the project before rebuilding.

**Global npm install**

If you installed promptfoo globally with npm, reinstall it under the active Node.js version:

```bash
npm install -g promptfoo@latest
```

**npx**

If the error happens with `npx promptfoo@latest`, remove the cached npx install and run the command again. With newer npm versions, remove only the matching cache entry:

```bash
npm cache npx ls
npm cache npx rm <key>
npx -y promptfoo@latest
```

Use the key shown next to `promptfoo@latest` in `npm cache npx ls`. If your npm version does not support `npm cache npx`, remove the older npx cache directory instead:

```bash
rm -rf ~/.npm/_npx && npx -y promptfoo@latest
```

This forces a fresh install for the current Node.js version.
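To confirm which ABI your active runtime expects (the `NODE_MODULE_VERSION` number in the error message) and verify you rebuilt under the Node.js version you intended, you can print it directly:

```bash
node --version
node -p "process.versions.modules"   # prints the NODE_MODULE_VERSION for the active Node.js
```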
## Native build failures

Some dependencies like [better-sqlite3](https://github.com/WiseLibs/better-sqlite3) include native code that must compile locally. Ensure your machine has a C/C++ build toolchain:

- **Ubuntu/Debian**: `sudo apt-get install build-essential`
- **macOS**: `xcode-select --install`
- **Windows**: [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)

If the prebuilt binaries fail, force a local build:

```bash
npm install --build-from-source
# or
npm rebuild
```

## Network and proxy issues

If you're behind a corporate proxy or firewall and having trouble connecting to LLM APIs:

### Configure proxy settings

Set standard proxy environment variables before running promptfoo:

```bash
# Set proxy for HTTPS requests (most common)
export HTTPS_PROXY=http://proxy.company.com:8080

# With authentication if needed
export HTTPS_PROXY=http://username:password@proxy.company.com:8080

# Exclude specific hosts from proxying
export NO_PROXY=localhost,127.0.0.1,internal.domain.com
```

### Custom CA certificates

For environments with custom certificate authorities:

```bash
export PROMPTFOO_CA_CERT_PATH=/path/to/ca-bundle.crt
```

### Verify your configuration

Run `promptfoo debug` to see detected proxy settings and verify your network configuration is correct.
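A quick way to confirm that outbound HTTPS traffic actually goes through the proxy is to hit a provider endpoint with `curl` before re-running your eval. This is a sketch; substitute the API host you actually use:

```bash
# Any HTTP status code (even 401) means the proxy path works;
# a timeout or TLS error points back at proxy/CA configuration
curl -sS -x "$HTTPS_PROXY" -o /dev/null -w "%{http_code}\n" https://api.openai.com/v1/models
```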
See the [FAQ](/docs/faq/#how-do-i-configure-promptfoo-for-corporate-networks-or-proxies) for complete proxy and SSL configuration details.

## OpenAI API key is not set

If you're using OpenAI, set the `OPENAI_API_KEY` environment variable or add `apiKey` to the provider config.

For default text-only model grading and synthesis, you can also install `@openai/codex-sdk`, sign in through the Codex CLI, and let Promptfoo use `openai:codex-sdk` automatically when no higher-priority API credentials are set. This does not cover embedding or moderation providers.

If you're not using OpenAI but still receiving this message, you probably have some [model-graded metric](/docs/configuration/expected-outputs/model-graded/) such as `similar` or `moderation` that requires you to [override the grader](/docs/configuration/expected-outputs/model-graded/#overriding-the-llm-grader) or configure an embedding/moderation provider explicitly.

Follow the instructions to override the grader, e.g., using the `defaultTest` property:

```yaml
defaultTest:
  options:
    provider:
      text:
        id: azureopenai:chat:gpt-4o-deployment
        config:
          apiHost: xxx.openai.azure.com
      embedding:
        id: azureopenai:embeddings:text-embedding-ada-002-deployment
        config:
          apiHost: xxx.openai.azure.com
```

## Python/JavaScript tool files require function name

If you see errors like `Python files require a function name` when loading tools from Python or JavaScript files, you need to specify the function name that returns the tool definitions.

### Solution

Python and JavaScript tool files must specify a function name using the `file://path:function_name` format:

```yaml title="promptfooconfig.yaml"
providers:
  - id: openai:chat:gpt-4.1-mini
    config:
      # Correct - specifies function name
      tools: file://./tools.py:get_tools
      # or for JavaScript/TypeScript
      # tools: file://./tools.js:getTools
```

The function must return a tool definitions array (it can be synchronous or asynchronous):

```python title="tools.py"
def get_tools():
    return [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ]
```

```javascript title="tools.js"
function getTools() {
  return [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    },
  ];
}

module.exports = { getTools };
```

## How to triage stuck evals

When running evals, you may encounter timeout errors, especially when using local providers or when running many concurrent requests. Here's how to fix them:

**Common use cases:**

- Ensure evaluations complete within a time limit (useful for CI/CD)
- Handle custom providers or providers that get stuck
- Prevent runaway costs from long-running evaluations

You can control two settings: a timeout for individual test cases and a timeout for the entire evaluation.

### Quick fixes

**Set timeouts for individual requests and total evaluation time:**

```bash
export PROMPTFOO_EVAL_TIMEOUT_MS=30000    # 30 seconds per request
export PROMPTFOO_MAX_EVAL_TIME_MS=300000  # 5 minutes total limit
npx promptfoo eval
```

You can also set these values in your `.env` file or promptfoo config file:

```yaml title="promptfooconfig.yaml"
env:
  PROMPTFOO_EVAL_TIMEOUT_MS: 30000
  PROMPTFOO_MAX_EVAL_TIME_MS: 300000
```

## Debugging Python

When using custom Python providers, prompts, hooks, assertions, etc., you may need to debug your Python code. Here are some tips to help you troubleshoot issues:

### Viewing Python output

To see the output from your Python script, including print statements, set the `LOG_LEVEL` environment variable to `debug` when running your eval:

```bash
LOG_LEVEL=debug npx promptfoo eval
```

Alternatively, you can use the `--verbose` flag:

```bash
npx promptfoo eval --verbose
```

### Using the Python debugger (pdb)

Promptfoo supports native Python debugging with pdb. To enable it:

```bash
export PROMPTFOO_PYTHON_DEBUG_ENABLED=true
```

Then add breakpoints in your Python code:

```python
import pdb

def call_api(prompt, options, context):
    pdb.set_trace()  # Debugger will pause here
    # Your code...
```
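To see where this output and the breakpoint end up in practice, here is a minimal sketch of a custom provider instrumented for debugging. The `call_api` signature matches the example above; the file name and placeholder "model call" are hypothetical, and the returned dict assumes the usual `output` result field:

```python title="my_provider.py"
import json

def call_api(prompt, options, context):
    # These print() lines appear in the console when running with
    # LOG_LEVEL=debug or --verbose
    print(f"Prompt: {prompt!r}")
    # context is typically a dict containing the test's vars
    print(f"Vars: {json.dumps(context.get('vars', {}), default=str)}")

    # Uncomment to pause here when PROMPTFOO_PYTHON_DEBUG_ENABLED=true
    # import pdb; pdb.set_trace()

    result = prompt.upper()  # placeholder for a real model call
    return {"output": result}
```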
### Python Installation and Path Issues

If you encounter errors like `spawn py -3 ENOENT` or `Python 3 not found`, promptfoo cannot locate your Python installation. Here's how to resolve this:

#### Setting a Custom Python Path

Use the `PROMPTFOO_PYTHON` environment variable to specify your Python executable:

```bash
# Windows (if Python is installed at a custom location)
# This `export` syntax is for Git Bash/WSL; in PowerShell use: $env:PROMPTFOO_PYTHON = "C:\Python\3_11\python.exe"
export PROMPTFOO_PYTHON=C:\Python\3_11\python.exe

# macOS/Linux
export PROMPTFOO_PYTHON=/usr/local/bin/python3

# Then run your evaluation
npx promptfoo eval
```

#### Per-Provider Python Configuration

You can also set the Python path for specific providers in your config:

```yaml
providers:
  - id: 'file://my_provider.py'
    config:
      pythonExecutable: /path/to/specific/python
```

#### Windows-Specific Issues

On Windows, promptfoo tries to detect Python in this order:

1. Provider-specific `pythonExecutable` config (if set)
2. `PROMPTFOO_PYTHON` environment variable (if set)
3. **Windows smart detection**: Uses the `where python` command and filters out Microsoft Store stubs
4. `python -c "import sys; print(sys.executable)"` (to get the actual Python path)
5. Common fallback commands: `python`, `python3`, `py -3`, `py`

If you don't have the Python launcher (`py.exe`) installed but have Python directly, make sure the `python` command works from your command line. If not, either:

- Add your Python installation directory to your PATH
- Set `PROMPTFOO_PYTHON` to the full path of your `python.exe`

**Common Windows Python locations:**

- Microsoft Store: `%USERPROFILE%\AppData\Local\Microsoft\WindowsApps\python.exe`
- Direct installer: `C:\Python3X\python.exe` (where X is the version)
- Anaconda: `C:\Users\YourName\anaconda3\python.exe`

#### Testing Your Python Configuration

To verify your Python is correctly configured:

```bash
# Test that promptfoo can find your Python
python -c "import sys; print(sys.executable)"

# If this works but promptfoo still has issues, set PROMPTFOO_PYTHON:
export PROMPTFOO_PYTHON=$(python -c "import sys; print(sys.executable)")
```

### Handling errors

If you encounter errors in your Python script, the error message and stack trace will be displayed in the promptfoo output. Check this information for clues about what might be going wrong in your code.

Remember that promptfoo runs your Python script in a separate process, so some standard debugging techniques may not work as expected. Logging and the pdb breakpoints described above are the most reliable ways to troubleshoot issues in your Python providers.

## Debugging the Database

1. Set environment variables:

   ```bash
   export PROMPTFOO_ENABLE_DATABASE_LOGS=true
   export LOG_LEVEL=debug
   ```

2. Run your command:

   ```bash
   npx promptfoo eval
   ```

3. Disable logging when done:

   ```bash
   unset PROMPTFOO_ENABLE_DATABASE_LOGS
   ```

## Finding log files

Promptfoo logs errors and debug output to `~/.promptfoo/logs` by default. To inspect them from the CLI:

```bash
promptfoo logs --list
promptfoo logs
```

Change the location by setting `PROMPTFOO_LOG_DIR` to a different directory. An error log and a debug log are created for each run.
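If you need to pull recent log contents into a bug report or search them for a specific provider error, plain shell tools work on the log directory as well. This is a sketch; the exact file names depend on your runs and any `PROMPTFOO_LOG_DIR` override:

```bash
# List the most recent log files, then show recent error lines
ls -t ~/.promptfoo/logs | head -5
grep -ri "error" ~/.promptfoo/logs | tail -20
```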