We have all been there recently. A client calls you up, completely energized. They just saw a flashy new ChatGPT feature online. Suddenly, they demand “AI” sprinkled all over their existing project. Figuring out the best way to handle this used to be a massive headache. You would wrestle with raw HTTP requests and confusing API docs.
Luckily, the openai-php/laravel package changes the game entirely. It is the absolute best way to handle Laravel AI integration right now. In this post, we will look at exactly how to drop it into your codebase. We will build real features you can actually use immediately. We won’t just build a useless, generic chatbot.
Grab a coffee. Open up your code editor. Let’s get your Laravel app talking directly to the future.
Why AI Is No Longer Optional in Laravel Apps (2026)
Let’s be brutally honest about web development right now. Users expect significantly smarter applications today. They absolutely refuse to read a massive wall of text. They want a crisp, accurate summary instantly. They hate waiting two full days for a standard support reply.
If your application feels dumb, your competitors win easily. Adding AI-powered features to your Laravel app is not a gimmick. It is a strict, non-negotiable requirement for modern platforms. Freelance clients feel intense pressure to modernize their businesses. They pass that exact pressure right down to us developers. We must adapt our programming skillsets quickly.
The good news? You do not need a machine learning degree. You just need a solid API client and basic PHP knowledge. We can build incredibly smart features with very little code.
What Is the openai-php/laravel Package?
So, what exactly is this specific tool? It is a super clean, community-maintained wrapper. It sits cleanly around the official OpenAI PHP client. You can view the source code directly at github.com/openai-php/laravel. Brilliant developers in our community actively maintain it.
You could theoretically use Laravel’s native HTTP facade. I tried that painful route initially myself. It gets incredibly messy and complex fast. You have to manage custom headers manually every time. You must format massive JSON payloads perfectly. Catching weird API exceptions gets exhausting quickly.
This specific ChatGPT Laravel package handles all that tedious boilerplate. It gives you a beautiful, fluent facade. It feels completely native to the modern Laravel ecosystem. It remains the undisputed winner for API communication.
Installing and Setting Up openai-php/laravel in Your Laravel Project
Let’s get our hands dirty right now. Pulling this into your active project takes two minutes. First, fire up your local terminal window. Run the standard composer command below.
```bash
composer require openai-php/laravel
```

Once that download finishes, you must publish the configuration. This exposes the config file to your application directory. You can tweak the underlying client settings there later.
```bash
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
```

This creates a config/openai.php file in your project. You don’t need to touch it much; all the important stuff goes in your .env file, which is exactly how Laravel developers like it.
Next, open up your project’s .env file. You need to securely add your OPENAI_API_KEY. You grab a fresh key directly from platform.openai.com.
```env
OPENAI_API_KEY=your-api-key-here
OPENAI_ORGANIZATION=your-org-id-here
```

That is literally the entire setup process. Laravel’s incredible auto-discovery handles the rest automatically. You are ready to start making requests immediately.
To get your API key, head to platform.openai.com, sign in, go to API Keys in the left sidebar, and click Create new secret key. Give it a name like “Laravel App” so you remember what it’s for. Copy it immediately — OpenAI only shows it once.
Paste it into your .env, run php artisan config:clear, and you’re fully set up. No service provider registration, no manual binding — Laravel auto-discovers the package. That’s it. Genuinely.
Making Your First API Call to ChatGPT
Now that we have everything installed, let’s test it. We will write a simple controller method today. We want to ensure everything connects properly to the server. We use the standard create method here.
Let’s write some actual code. Generate a controller:
```bash
php artisan make:controller AiController
```

Open it up and add a simple test method:

```php
<?php

namespace App\Http\Controllers;

use OpenAI\Laravel\Facades\OpenAI;
use Illuminate\Http\Request;

class AiController extends Controller
{
    public function test()
    {
        // Send a chat message to the OpenAI API
        $result = OpenAI::chat()->create([
            'model' => 'gpt-4o', // Which AI model to use
            'messages' => [
                [
                    'role' => 'user', // 'user' = the person asking
                    'content' => 'Explain Laravel in one sentence for a beginner.',
                ],
            ],
            'max_tokens' => 100, // Limit how long the response can be
        ]);

        // Extract the text from the response object
        $reply = $result->choices[0]->message->content;

        return response()->json(['reply' => $reply]);
    }
}
```

Wire it up in routes/web.php:
```php
use App\Http\Controllers\AiController;

Route::get('/ai-test', [AiController::class, 'test']);
```

Visit /ai-test in your browser. If everything is configured correctly, you’ll see a JSON response with ChatGPT’s answer. That little moment of seeing the AI response come back in your own Laravel app for the first time is honestly pretty satisfying.
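If the call fails instead (bad key, rate limit, network hiccup), the package throws typed exceptions you can catch inside your controller method. A hedged sketch, assuming the exception classes from the underlying openai-php/client library; check the class names against the version you actually install:

```php
use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;

try {
    $result = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [
            ['role' => 'user', 'content' => 'Say hello.'],
        ],
        'max_tokens' => 20,
    ]);
} catch (ErrorException $e) {
    // The API itself rejected the request: bad key, rate limit, unknown model...
    report($e);
    return response()->json(['error' => 'AI service returned an error.'], 502);
} catch (TransporterException $e) {
    // Network-level failure: timeout, DNS, connection refused
    report($e);
    return response()->json(['error' => 'Could not reach the AI service.'], 504);
}
```

Wrapping every AI call this way keeps a flaky upstream API from surfacing as an ugly 500 page to your users.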
A quick note on choices[0]: OpenAI returns an array of possible responses. By default, you get one. You can request multiple with the n parameter, but for 99% of use cases, choices[0] is all you need.
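If you ever do want alternatives, say three candidate headlines to pick from, a minimal sketch with the n parameter looks like this:

```php
$result = OpenAI::chat()->create([
    'model' => 'gpt-4o-mini',
    'messages' => [
        ['role' => 'user', 'content' => 'Write a short headline about Laravel.'],
    ],
    'n' => 3,           // Ask for three independent completions
    'max_tokens' => 30, // Keep each one short (and cheap)
]);

// Each completion arrives as its own entry in the choices array
foreach ($result->choices as $choice) {
    echo $choice->message->content . PHP_EOL;
}
```

Note that you pay for the tokens of every choice you request, so leave n alone unless you genuinely need options.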
Also notice the max_tokens parameter. Always set it. We’ll talk more about this in the cost section, but it’s a good habit to start right now.
Example #1 — Auto-Generate a Blog Post Summary
This is probably the most immediately useful thing you can build. You have a long blog post — maybe 800 words — and you want a short 2–3 sentence summary generated automatically. Perfect for meta descriptions, preview cards, or email newsletters.
Here’s the full controller method:
```php
public function summarize(Request $request)
{
    $request->validate([
        'content' => 'required|string|min:100',
    ]);

    $blogContent = $request->input('content');

    $result = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini', // Cheaper model, works great for summaries
        'messages' => [
            [
                'role' => 'system',
                // The system message shapes how the AI behaves throughout the conversation
                'content' => 'You are a helpful assistant that writes concise blog summaries.
                    Always respond in exactly 2-3 sentences.
                    Be clear, direct, and avoid marketing language.',
            ],
            [
                'role' => 'user',
                'content' => 'Please summarize this blog post: ' . $blogContent,
            ],
        ],
        'max_tokens' => 150,
    ]);

    $summary = $result->choices[0]->message->content;

    return response()->json(['summary' => $summary]);
}
```

And the Blade template to display the result:
```blade
@if(isset($summary))
    <div class="mt-4 p-4 bg-gray-100 rounded-lg">
        <h3 class="font-semibold text-gray-700 mb-2">AI-Generated Summary</h3>
        <p class="text-gray-600">{{ $summary }}</p>
    </div>
@endif
```

A note on prompt engineering: that system message is doing a lot of heavy lifting. Without it, you might get a five-paragraph essay when you wanted two sentences. The more specific you are in the system prompt, the more consistent and usable your results become.
Try different variations. “Always respond in bullet points” gives you a different result than “write in a professional tone.” That’s prompt engineering — and it’s a skill worth developing even at this basic level.
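One pattern that makes experimenting painless: pull the system prompt out into a variable so you can swap styles without touching the rest of the call. The summarizeWith helper below is purely illustrative (not part of the package), and $post is assumed to hold your article text:

```php
// Hypothetical helper: same user content, different system prompts
function summarizeWith(string $systemPrompt, string $content): string
{
    $result = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [
            ['role' => 'system', 'content' => $systemPrompt],
            ['role' => 'user', 'content' => 'Summarize this: ' . $content],
        ],
        'max_tokens' => 150,
    ]);

    return $result->choices[0]->message->content;
}

// Compare two styles side by side
$bullets = summarizeWith('Always respond in bullet points.', $post);
$formal  = summarizeWith('Write in a professional tone, two sentences max.', $post);
```

Run a handful of real posts through a few prompt variants and you will quickly develop a feel for what each phrasing buys you.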
Example #2 — Auto-Reply to a Support Ticket
This one is a genuine time-saver. Support agents spend hours writing variations of the same polite replies. With this feature, the AI drafts a suggested response and the agent just reviews, adjusts if needed, and sends. That cuts response time significantly without removing the human judgment.
Here’s the controller:
```php
public function draftTicketReply(Request $request)
{
    $request->validate([
        'ticket_message' => 'required|string',
        'customer_name' => 'nullable|string',
    ]);

    $ticketContent = $request->input('ticket_message');
    $customerName = $request->input('customer_name', 'there');

    $result = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [
            [
                'role' => 'system',
                'content' => 'You are a friendly, professional customer support agent.
                    Write a helpful reply to the support ticket provided.
                    Be empathetic, offer a clear next step or solution,
                    and address the customer by their first name.
                    Keep the reply under 150 words.',
            ],
            [
                'role' => 'user',
                'content' => "Customer name: {$customerName}\n\nSupport ticket: {$ticketContent}",
            ],
        ],
        'max_tokens' => 250,
    ]);

    $draftReply = $result->choices[0]->message->content;

    return response()->json(['draft_reply' => $draftReply]);
}
```

Display it as an editable textarea in your support dashboard:
```blade
<div class="draft-reply mt-4">
    <label class="block font-semibold mb-1 text-gray-700">
        AI Draft Reply — Review before sending:
    </label>
    <textarea
        name="reply"
        rows="6"
        class="w-full border rounded p-3 text-gray-800"
    >{{ $draftReply }}</textarea>
    <button type="submit" class="mt-2 px-4 py-2 bg-blue-600 text-white rounded">
        Send Reply
    </button>
</div>
```

One thing I want to emphasize: always make the reply editable before sending. Never auto-send AI responses to customers. Besides the accuracy risk, it’s also just bad practice from a trust perspective. The human agent needs to stay in the loop; AI should assist, not replace.
In practice, most of the time the agent reads the draft, maybe tweaks a sentence, and hits send. You’re cutting their effort by 70–80% while keeping full human oversight. That’s the sweet spot.
Tips to Keep Your OpenAI API Costs Low
Nobody warns you about this early enough. API costs can sneak up fast if you’re not thoughtful from day one. Here’s what actually works:
- Use `gpt-4o-mini` by default. It handles the majority of tasks (summaries, classifications, simple replies) at a fraction of the cost of `gpt-4o`. Only upgrade to the full model when you actually need complex reasoning or nuanced output.
- Always set `max_tokens`. This is your most direct cost control. A summary doesn’t need 1000 tokens. Cap it at 150. A support reply? Cap it at 250. Be deliberate about it.
- Cache repeated prompts. If your app generates the same type of response for common inputs, cache it with Laravel Cache instead of calling the API every single time.
```php
$faqReply = Cache::remember('faq-shipping-reply', 3600, function () {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [
            ['role' => 'user', 'content' => 'What is your shipping policy?'],
        ],
        'max_tokens' => 200,
    ])->choices[0]->message->content;
});
```

- Rate limit your AI routes. One user shouldn’t be able to hammer your endpoint and burn through your credits. Laravel’s built-in throttle middleware handles this in one line:
```php
Route::middleware('throttle:10,1')->group(function () {
    Route::post('/summarize', [AiController::class, 'summarize']);
    Route::post('/draft-reply', [AiController::class, 'draftTicketReply']);
});
```

- Log your token usage during development. The API response includes usage data; check `$result->usage->totalTokens`. Log it while you’re in development so you understand your real consumption before you go live and start getting surprised.
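A throwaway sketch of what that logging can look like, assuming the camelCase usage properties exposed by the PHP client (verify against your installed version):

```php
use Illuminate\Support\Facades\Log;
use OpenAI\Laravel\Facades\OpenAI;

$result = OpenAI::chat()->create([
    'model' => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'Ping']],
    'max_tokens' => 10,
]);

// Usage comes back with every response: prompt, completion, and total tokens
Log::info('OpenAI call', [
    'prompt_tokens'     => $result->usage->promptTokens,
    'completion_tokens' => $result->usage->completionTokens,
    'total_tokens'      => $result->usage->totalTokens,
]);
```

After a week of these log lines you can estimate your monthly bill with real numbers instead of guesses.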
These five habits alone will keep your OpenAI bills reasonable even as you scale up usage.
We covered a lot of ground here. You installed openai-php/laravel, configured your API key, made your first live ChatGPT call, and built two features that are genuinely useful in production — an automatic blog summarizer and an AI-drafted support reply system. That’s a real AI-powered Laravel app, not a toy demo.
My honest advice: start small. Pick one of these features, ship it, watch how it performs, and then decide where to add AI next. You don’t need to rebuild your whole app at once. One good AI feature that works reliably is worth ten half-baked ones.
The openai-php/laravel package makes it easy to expand later too — streaming responses, embeddings, image generation, and more are all available through the same clean interface once you’re comfortable with the basics.
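As a small taste of what’s next: the client exposes a streamed variant of the chat call that yields partial chunks as they arrive, which is how you get the ChatGPT-style typing effect. A minimal sketch (method and property names per the openai-php docs; confirm them for your version):

```php
use OpenAI\Laravel\Facades\OpenAI;

$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-4o-mini',
    'messages' => [
        ['role' => 'user', 'content' => 'Explain queues in Laravel.'],
    ],
]);

// Each iteration yields a partial chunk; print tokens as they arrive
foreach ($stream as $response) {
    echo $response->choices[0]->delta->content ?? '';
}
```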
Got stuck somewhere in the setup? Drop a comment below — happy to help debug. And if you’re building something interesting with this, I’d genuinely love to hear about it. Follow me for more Laravel + AI tutorials — we’ve barely scratched the surface.

