
    Fine-tuning vs. in-context learning: New research guides better LLM customization for real-world tasks

    By TechAiVerse | May 10, 2025 | 6 Mins Read

    May 9, 2025 5:23 PM

    Credit: VentureBeat made with Midjourney

    Two popular approaches for customizing large language models (LLMs) for downstream tasks are fine-tuning and in-context learning (ICL). In a recent study, researchers at Google DeepMind and Stanford University explored the generalization capabilities of these two methods. They found that ICL generalizes better, though at a higher computational cost at inference time. They also proposed a novel approach to get the best of both worlds.

    The findings can help developers make crucial decisions when building LLM applications for their bespoke enterprise data.

    Testing how language models learn new tricks

    Fine-tuning involves taking a pre-trained LLM and further training it on a smaller, specialized dataset. This adjusts the model’s internal parameters to teach it new knowledge or skills. In-context learning (ICL), on the other hand, doesn’t change the model’s underlying parameters. Instead, it guides the LLM by providing examples of the desired task directly within the input prompt. The model then uses these examples to figure out how to handle a new, similar query.
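The distinction above can be sketched in a few lines of code. This is a minimal illustration, not the paper's setup: the prompt format and the training-record schema are invented for this example.

```python
# Contrast the two customization approaches: ICL packs demonstrations into
# the prompt at inference time; fine-tuning turns the same examples into
# training records that update the model's weights.

def build_icl_prompt(examples, query):
    """ICL: demonstrations go into the prompt; model weights stay frozen."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {query}\nA:"

def build_finetune_records(examples):
    """Fine-tuning: the same examples become training records."""
    return [{"prompt": f"Q: {q}\nA:", "completion": f" {a}"} for q, a in examples]

examples = [("Are femp more dangerous than glon?", "Yes")]
prompt = build_icl_prompt(examples, "Are glon less dangerous than femp?")
records = build_finetune_records(examples)
```

Note the trade-off this makes concrete: the ICL prompt must carry the examples on every call, while the fine-tuning records are consumed once during training.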

    The researchers set out to rigorously compare how well models generalize to new tasks using these two methods. They constructed “controlled synthetic datasets of factual knowledge” with complex, self-consistent structures, like imaginary family trees or hierarchies of fictional concepts. 

    To ensure they were testing the model’s ability to learn new information, they replaced all nouns, adjectives, and verbs with nonsense terms, avoiding any overlap with the data the LLMs might have encountered during pre-training. 
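The nonce-word substitution described above can be sketched as follows; the vocabulary and the example sentence are invented for this illustration, not taken from the paper's datasets.

```python
# Swap each content word (noun/adjective/verb) for a fresh nonsense term,
# keeping function words intact, so the resulting facts cannot overlap
# with anything seen during pre-training.

NONCE_WORDS = iter(["femp", "glon", "yomp", "troff", "skree"])

def noncify(sentence, content_words):
    """Replace content words with nonsense terms, reusing one term per word."""
    mapping, out = {}, []
    for tok in sentence.split():
        if tok in content_words:
            if tok not in mapping:
                mapping[tok] = next(NONCE_WORDS)
            out.append(mapping[tok])
        else:
            out.append(tok)
    return " ".join(out), mapping

nonsense, mapping = noncify(
    "sharks are more dangerous than dolphins",
    {"sharks", "dangerous", "dolphins"},
)
# -> "femp are more glon than yomp"
```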

    The models were then tested on various generalization challenges. For instance, one test involved simple reversals. If a model was trained that “femp are more dangerous than glon,” could it correctly infer that “glon are less dangerous than femp”? Another test focused on simple syllogisms, a form of logical deduction. If told “All glon are yomp” and “All troff are glon,” could the model deduce that “All troff are yomp”? They also used a more complex “semantic structure benchmark” with a richer hierarchy of these made-up facts to test more nuanced understanding.
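The two probes above reduce to toy form as follows. Representing facts as tuples is my own simplification; the paper's semantic structure benchmark is considerably richer.

```python
# Reversal: invert a comparative fact. Syllogism: chain two universal
# statements into their logical conclusion.

def reversal(subj, relation, inverse, obj):
    """'femp are more dangerous than glon' -> 'glon are less dangerous than femp'."""
    return (obj, inverse, subj)

def syllogism(all_b_are_c, all_a_are_b):
    """'All glon are yomp' + 'All troff are glon' -> 'All troff are yomp'."""
    b1, c = all_b_are_c
    a, b2 = all_a_are_b
    assert b1 == b2, "middle terms must match"
    return (a, c)

rev = reversal("femp", "more dangerous than", "less dangerous than", "glon")
# -> ("glon", "less dangerous than", "femp")
concl = syllogism(("glon", "yomp"), ("troff", "glon"))
# -> ("troff", "yomp")
```

The point of the tests is that nothing in the training data states these conclusions directly; the model must derive them.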

    “Our results are focused primarily on settings about how models generalize to deductions and reversals from fine-tuning on novel knowledge structures, with clear implications for situations when fine-tuning is used to adapt a model to company-specific and proprietary information,” Andrew Lampinen, Research Scientist at Google DeepMind and lead author of the paper, told VentureBeat.

    To evaluate performance, the researchers fine-tuned Gemini 1.5 Flash on these datasets. For ICL, they fed the entire training dataset (or large subsets) as context to an instruction-tuned model before posing the test questions.

    The results consistently showed that, in data-matched settings, ICL led to better generalization than standard fine-tuning. Models using ICL were generally better at tasks like reversing relationships or making logical deductions from the provided context. Pre-trained models, without fine-tuning or ICL, performed poorly, indicating the novelty of the test data. 

    “One of the main trade-offs to consider is that, whilst ICL doesn’t require fine-tuning (which saves the training costs), it is generally more computationally expensive with each use, since it requires providing additional context to the model,” Lampinen said. “On the other hand, ICL tends to generalize better for the datasets and models that we evaluated.”

    A hybrid approach: Augmenting fine-tuning

    Building on the observation that ICL excels at flexible generalization, the researchers proposed a new method to enhance fine-tuning: adding in-context inferences to fine-tuning data. The core idea is to use the LLM’s own ICL capabilities to generate more diverse and richly inferred examples, and then add these augmented examples to the dataset used for fine-tuning.

    They explored two main data augmentation strategies:

    1. A local strategy: This approach focuses on individual pieces of information. The LLM is prompted to rephrase single sentences from the training data or draw direct inferences from them, such as generating reversals. 
    2. A global strategy: The LLM is given the full training dataset as context, then prompted to generate inferences by linking a particular document or fact with the rest of the provided information, leading to a longer reasoning trace of relevant inferences.
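The two strategies above can be sketched as follows. The prompt wording and the stub LLM are hypothetical stand-ins, not the paper's actual prompts.

```python
# Local augmentation works sentence by sentence; global augmentation gives
# the model the whole dataset as context before asking for linking inferences.
# Both outputs are then appended to the fine-tuning data.

def local_augment(call_llm, sentence):
    """Local: rephrase one sentence and draw direct inferences (e.g. reversals)."""
    return call_llm(f"Rephrase this fact and state its direct inferences, "
                    f"such as a reversal:\n{sentence}")

def global_augment(call_llm, dataset, document):
    """Global: use the full dataset as context to link one document to the rest."""
    context = "\n".join(dataset)
    return call_llm(f"Given these facts:\n{context}\n\n"
                    f"List inferences connecting this document to the rest:\n{document}")

def augmented_finetune_data(call_llm, dataset):
    """Original facts plus one local and one global augmentation per fact."""
    augmented = list(dataset)
    for doc in dataset:
        augmented.append(local_augment(call_llm, doc))
        augmented.append(global_augment(call_llm, dataset, doc))
    return augmented

# Stub standing in for a real model call.
def call_llm(prompt):
    return f"[model inference for prompt of {len(prompt)} chars]"

facts = ["zarn are taller than quib", "all quib are blen"]
data = augmented_finetune_data(call_llm, facts)
```

The design choice worth noting: the model's own ICL ability is spent once, offline, to enrich the training set, rather than on every inference call.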

    When the models were fine-tuned on these augmented datasets, the gains were striking: augmented fine-tuning improved generalization markedly, outperforming not only standard fine-tuning but also plain ICL.

    “For example, if one of the company documents says ‘XYZ is an internal tool for analyzing data,’ our results suggest that ICL and augmented finetuning will be more effective at enabling the model to answer related questions like ‘What internal tools for data analysis exist?’” Lampinen said.

    This approach offers a compelling path forward for enterprises. By investing in creating these ICL-augmented datasets, developers can build fine-tuned models that exhibit stronger generalization capabilities.

    This can lead to more robust and reliable LLM applications that perform better on diverse, real-world inputs without incurring the continuous inference-time costs associated with large in-context prompts. 

    “Augmented fine-tuning will generally make the model fine-tuning process more expensive, because it requires an additional step of ICL to augment the data, followed by fine-tuning,” Lampinen said. “Whether that additional cost is merited by the improved generalization will depend on the specific use case. However, it is computationally cheaper than applying ICL every time the model is used, when amortized over many uses of the model.”
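The amortization argument above comes down to simple arithmetic. The costs below are invented illustrative numbers, not measurements from the paper.

```python
# ICL pays a per-call cost for the long context; augmented fine-tuning pays
# a one-time cost (augmentation + training) and a smaller per-call cost.

def total_cost(one_time, per_use, n_uses):
    return one_time + per_use * n_uses

N = 1000
icl = total_cost(one_time=0.0, per_use=5.0, n_uses=N)        # long context every call
aug_ft = total_cost(one_time=2000.0, per_use=0.5, n_uses=N)  # augment + fine-tune once
# At this volume the one-time cost amortizes: icl -> 5000.0, aug_ft -> 2500.0
```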

    While Lampinen noted that further research is needed to see how the components they studied interact in different settings, he added that their findings indicate that developers may want to consider exploring augmented fine-tuning in cases where they see inadequate performance from fine-tuning alone. 

    “Ultimately, we hope this work will contribute to the science of understanding learning and generalization in foundation models, and the practicalities of adapting them to downstream tasks,” Lampinen said.
