Articles

  • Vercel Breached via Context AI OAuth Supply Chain Attack: A Post‑Mortem for AI Engineering Teams
    📄 Security

    An over‑privileged Context AI OAuth app quietly siphons Vercel environment variables, exposing customer credentials through a compromised AI integration. This is a realistic convergence of AI supply c...

    7 min · 1408 words
  • AI in Art Galleries: How Machine Intelligence Is Rewriting Curation, Audiences, and the Art Market
    🛡️ Safety

    Artificial intelligence has shifted from spectacle to infrastructure in galleries—powering recommendations, captions, forecasting, and experimental pricing.[1][4] For technical teams and leadership...

    7 min · 1451 words
  • Comment and Control: How Prompt Injection in Code Comments Can Steal API Keys from Claude Code, Gemini CLI, and GitHub Copilot
    📄 Security

    Code comments used to be harmless notes. With LLM tooling, they’re an execution surface. When Claude Code, Gemini CLI, or GitHub Copilot Agents read your repo, they usually see: > system prompt + de...

    7 min · 1473 words
  • Brigandi Case: How a $110,000 AI Hallucination Sanction Rewrites Risk for Legal AI Systems
    🌀 Hallucinations

    When two lawyers in Oregon filed briefs packed with fake cases and fabricated quotations, the result was not a quirky “AI fail”—it was a $110,000 sanction, dismissal with prejudice, and a public ethic...

    7 min · 1455 words
  • AI Adoption in Galleries: How Intelligent Systems Are Reshaping Curation, Audiences, and the Art Market
    🛡️ Safety

    1. Why Galleries Are Accelerating AI Adoption Galleries increasingly treat AI as core infrastructure, not an experiment. Interviews with international managers show AI now supports: - On‑site and on...

    7 min · 1403 words
  • Stanford AI Index 2026: What 22–94% Hallucination Rates Really Mean for LLM Engineering
    🌀 Hallucinations

    The latest Stanford AI Index from Stanford HAI reports hallucination rates between 22% and 94% across 26 leading large language models (LLMs). For engineers, this confirms LLMs are structurally unfit...

    7 min · 1406 words
  • Anthropic Claude Mythos Escape: How a Sandbox-Breaking AI Exposed Decades-Old Security Debt
    🛡️ Safety

    Anthropic never meant for Claude Mythos Preview to touch the public internet during early testing. Researchers put it in an air‑gapped container and told it to probe that setup: break out and email sa...

    7 min · 1497 words
  • When AI Hallucinates in Court: Inside Oregon’s $110,000 Vineyard Sanctions Case
    🌀 Hallucinations

    Two Oregon lawyers thought they were getting a productivity boost. Instead, AI‑generated hallucinations helped kill a $12 million lawsuit, triggered $110,000 in sanctions, and produced one of the cl...

    5 min · 950 words
  • AI Hallucinations, $110,000 Sanctions, and How to Engineer Safer Legal LLM Systems
    🌀 Hallucinations

    When a vineyard lawsuit ends in dismissal with prejudice and $110,000 in sanctions because counsel relied on hallucinated case law, that is not just an ethics failure—it is a systems‑design failure.[2...

    4 min · 880 words
  • Experimental AI Use Cases: 8 Wild Systems to Watch Next
    🛡️ Safety

    AI is escaping the chat window. Enterprise APIs process billions of tokens per minute, over 40% of OpenAI’s revenue is enterprise, and AWS is at a $15B AI run rate.[5] For ML engineers, “weird” dep...

    6 min · 1286 words
  • ICLR 2026 Integrity Crisis: How AI Hallucinations Slipped Into 50+ Peer‑Reviewed Papers
    🌀 Hallucinations

    In 2026, more than fifty accepted ICLR papers were found to contain hallucinated citations, non‑existent datasets, and synthetic “results” generated by large language models—yet they passed peer revie...

    7 min · 1329 words
  • Beyond Chatbots: Unconventional AI Experiments That Hint at the Next Wave of Capabilities
    🛡️ Safety

    Most engineering teams are still optimizing RAG stacks while AI quietly becomes core infrastructure. OpenAI’s APIs process over 15 billion tokens per minute, with enterprise already >40% of revenue [5...

    7 min · 1493 words

Topics Covered

🌀 AI Hallucinations

Understanding why LLMs invent information and how to prevent it.

🔍 RAG Best Practices

Retrieval-Augmented Generation: architectures, chunking strategies, and retrieval optimization.

👻 Ghost Sources

When AI cites sources that don't exist. Detection and prevention.

📉 KB Drift

How to detect and correct knowledge base drift.

✂️ Chunking Strategies

Optimal document splitting for better retrieval.

📊 LLM Evaluation

Metrics and methods to evaluate AI response quality.

⚖️ AI Regulation

Laws, regulations and compliance frameworks governing AI systems.

🛡️ AI Safety

Risks, safeguards and best practices for safe AI deployment.

Need a reliable KB for your AI?

CoreProse builds sourced knowledge bases that minimize hallucinations.