Struggling with AI agents that loop? Learn how tuning hidden parameters like temperature and token limits creates more reliable, cost-effective systems.

We often treat agents like magic prompts, but they are actually software systems. When they fail, it is usually because the "hidden knobs", the model's underlying parameters, have not been tuned correctly.
Inference parameters are the "hidden knobs" of an LLM engine, such as temperature, top-p, and frequency penalties, that control how a model generates text. While prompts provide the instructions, these parameters determine the statistical behavior of the engine. For example, lowering the temperature can make a model more deterministic and reliable for tasks like data extraction, while adjusting the frequency penalty can prevent an agent from getting stuck in a repetitive loop by penalizing words that have already been used.
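As a minimal sketch of this idea, the presets below map task types to sampling parameters. The function name and the specific values are illustrative assumptions, not tuned recommendations; the parameter names (`temperature`, `top_p`, `frequency_penalty`) follow common chat-completion APIs.

```python
def inference_params(task: str) -> dict:
    """Return illustrative sampling parameters for a task type."""
    if task == "extraction":
        # Zero temperature pushes the model toward its most likely
        # tokens, making structured output more deterministic.
        return {"temperature": 0.0, "top_p": 1.0, "frequency_penalty": 0.0}
    if task == "agent_loop":
        # A frequency penalty discourages re-emitting tokens the model
        # has already produced, which helps break repetitive loops.
        return {"temperature": 0.3, "top_p": 0.9, "frequency_penalty": 0.5}
    # Default: moderate creativity for open-ended generation.
    return {"temperature": 0.7, "top_p": 0.95, "frequency_penalty": 0.0}
```

The point is that the same prompt behaves very differently under each preset, so the preset belongs in version control next to the prompt.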
A Ghost Action occurs when an AI agent claims to have completed a task—such as booking a flight or sending an email—but never actually executes the underlying tool or API call. This happens because the LLM is designed to be helpful and "hallucinates" a success message to satisfy the user's request. To catch this, developers must move beyond looking at the final text output and instead use execution traces to verify that a "Thought" was actually followed by a technical "Action" span before the "Final Answer" was generated.
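A trace check for this can be sketched in a few lines. The span shape (a list of dicts with a `kind` field) is a hypothetical trace format for illustration; real tracing backends expose richer span objects.

```python
def has_ghost_action(spans: list[dict]) -> bool:
    """Flag a trace whose Final Answer was never preceded by an Action.

    A healthy trace interleaves thought -> action -> observation before
    the final answer; a ghost action skips straight to the answer.
    """
    kinds = [span["kind"] for span in spans]
    if "final_answer" not in kinds:
        return False  # run never finished; not a ghost action
    answer_idx = kinds.index("final_answer")
    # Ghost action: the model claimed completion without any tool call.
    return "action" not in kinds[:answer_idx]
```

Running this over production traces turns "the agent said it sent the email" into a verifiable yes or no.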
Standard agents often use a "greedy" reasoning framework like ReAct, which processes information one step at a time and can lose track of long-term goals. A Plan-and-Solve architecture splits the workload between two specialized agents: a "Planner" that breaks a large objective into a checklist, and an "Executor" that focuses solely on completing one specific step at a time. This reduces the cognitive load on the model, making it less likely to fail when handling intricate, multi-step processes like marketing campaigns or deep research.
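The split described above can be sketched as two callables wired together. The `planner` and `executor` here are stand-ins for separate LLM calls; the function names and signatures are assumptions for illustration.

```python
from typing import Callable

def plan_and_solve(goal: str,
                   planner: Callable[[str], list[str]],
                   executor: Callable[[str], str]) -> list[str]:
    """Break a goal into a checklist, then execute one step at a time.

    The executor only ever sees the current step, never the whole
    plan, which is what lowers the cognitive load on the model.
    """
    results = []
    for step in planner(goal):
        results.append(executor(step))
    return results
```

In a real system each callable would wrap its own prompt and model call, so the planner and executor can even run on different model sizes.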
Tool Adaptation focuses on improving the "Observations" an agent receives rather than just the "Instructions" it is given. If an agent provides poor answers because its search tool returns irrelevant data, the solution is to refine the retriever or the data source rather than the agent's brain. By ensuring the tools provide high-quality, domain-specific information, even a simpler or smaller LLM can perform at a high level, making the overall system more modular and efficient.
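One concrete form of tool adaptation is a thin wrapper that filters what the agent observes. The retriever interface assumed here, a callable returning `(text, score)` pairs, and the `min_score` threshold are illustrative, not a specific library's API.

```python
def adapted_search(query: str, retriever, min_score: float = 0.7) -> list[str]:
    """Wrap a retriever so the agent only observes high-relevance hits.

    Tuning happens in this layer (threshold, re-ranking, data source),
    not in the agent's prompt, keeping the system modular.
    """
    hits = retriever(query)  # assumed to yield (text, score) pairs
    return [text for text, score in hits if score >= min_score]
```

Because the agent never sees the low-score noise, even a smaller model downstream has less irrelevant context to reason over.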
The Sandwich Defense is a security strategy used to protect agents from prompt injection attacks, where users try to override system instructions. Because LLMs are subject to "primacy and recency" effects—meaning they pay the most attention to the beginning and end of a prompt—developers place the most critical safety guardrails at the very end of the prompt, following the user's input. This ensures the agent's final "thought" before generating a response is a reminder to adhere to security protocols regardless of what the user requested.
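The prompt assembly itself is simple to sketch. This is a minimal illustration of the ordering, with made-up section labels; production systems would combine it with input filtering and output checks rather than rely on ordering alone.

```python
def sandwich_prompt(system: str, user_input: str, guardrail: str) -> str:
    """Assemble a prompt with guardrails placed after untrusted input.

    Primacy keeps the system instructions salient at the top; recency
    makes the guardrail the last thing the model reads before answering.
    """
    return "\n\n".join([
        system,                       # trusted instructions first (primacy)
        f"User input: {user_input}",  # untrusted content in the middle
        guardrail,                    # safety reminder last (recency)
    ])
```

Even if the user input says "ignore all previous instructions", the guardrail text still arrives after it in the context window.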
