Hi, I'm Valerio, a software engineer from Italy. This guide is for all PHP developers who have an application online with real users, but need a deeper understanding of how to introduce (or drastically improve) scalability in their system using Laravel queues.

The first time I read about Laravel was in late 2013, at the beginning of version 5.x of the framework. I wasn't yet a developer involved in significant projects, and one of the aspects of modern frameworks, especially in Laravel, that sounded the most mysterious to me was "Queues". Reading the documentation, I guessed at the potential, but without real development experience it stayed just a theory in my mind.

Today I'm the creator of Inspector.dev, a real-time monitoring dashboard that executes thousands of jobs every hour, so my knowledge of this architecture is much better than in the past. In this article I'm going to show you how I discovered queues and jobs, and what configurations helped me to process a large amount of data in real time while keeping server resources cost-friendly.

A gentle introduction

When a PHP application receives an incoming http request, our code is executed sequentially, step by step, until the request's execution ends and a response is returned to the client (e.g., the user's browser).

That synchronous behavior is really intuitive, predictable, and simple to understand. I launch an http request to my endpoint, the application retrieves data from the database, converts it into an appropriate format, executes some additional tasks, and sends it back. It's linear.

Queues and jobs introduce asynchronous behaviors that break this linear flow. That's why these features seemed a little strange to me at the beginning.

But sometimes a time-consuming task is part of the execution cycle of an incoming http request, e.g., sending an email notification to all team members of a project.
It could mean sending six or ten emails, and it could take four or five seconds to complete. So every time a user clicks on that button, they need to wait five seconds before they can continue using the app. The more the application grows, the worse this problem gets.

What is a Job?

A Job is a class that implements the "handle" method, which contains the logic we want to execute asynchronously.

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Bus\Queueable;

class CallExternalAPI implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    /**
     * @var string
     */
    protected $url;

    /**
     * Create a new job instance.
     *
     * @param string $url
     */
    public function __construct($url)
    {
        $this->url = $url;
    }

    /**
     * Execute what you want.
     *
     * @return void
     * @throws \Throwable
     */
    public function handle()
    {
        file_get_contents($this->url);
    }
}
```

As mentioned above, the main reason to encapsulate a piece of code into a Job is to execute a time-consuming task without forcing the user to wait for its execution.

What do you mean by "time-consuming tasks"?

This is a legitimate question. Sending emails is the most common example used in articles that talk about queues, but I want to tell you what I needed to do in my real experience.

As a product owner it's really important for me to keep users' journey information in sync with our marketing and customer support tools. So, based on user actions, we update user information in various external software via APIs (a.k.a. external http calls) for marketing and customer care purposes.

One of the most used endpoints in my application could send 10 emails and execute 3 http calls to external services before completing. No user would wait all this time; much more likely they would stop using my application.
Thanks to queues, I can encapsulate all these tasks in dedicated classes, pass in the constructor the information needed to do their job, and schedule their execution for later in the background so my controller can return a response immediately.

```php
<?php

class ProjectController
{
    public function store(Request $request)
    {
        $project = Project::create($request->all());

        // Defer NotifyMembers, TagUserAsActive, NotifyToProveSource,
        // passing the information needed to do their job
        Notification::queue(new NotifyMembers($project->owners));
        $this->dispatch(new TagUserAsActive($project->owners));
        $this->dispatch(new NotifyToProveSource($project->owners));

        return $project;
    }
}
```

I don't need to wait until all of these processes are completed before returning a response; rather, I wait only for the time needed to publish them in the queue. This could mean the difference between 10 seconds and 10 milliseconds!

Who executes these jobs after posting them in the queue?

This is a classic "publisher/consumer" architecture. We've just published our jobs in the queue from the controller, so now we are going to understand how the queue is consumed, and finally how jobs are executed.

To consume a queue we need to run one of the most popular artisan commands:

php artisan queue:work

As reported in the Laravel documentation:

Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.

Great! Laravel provides a ready-to-use interface to put jobs in a queue and a ready-to-use command to pull jobs from the queue and execute their code in the background.

The role of Supervisor

This was another "strange thing" at the beginning, but I think that's normal when discovering new things. I have been through this phase of study myself, so I write these articles to help me organize my skills, and at the same time to help other developers expand their knowledge.

If a job fails by throwing an exception, the queue:work command will stop its work.
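Laravel also lets the job itself react to that exception: if the job class defines a `failed()` method, the queue worker calls it once the job has definitively failed. A minimal sketch, reusing the `CallExternalAPI` job from above (the log message is just an illustration):

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Log;

class CallExternalAPI implements ShouldQueue
{
    // ... constructor and handle() as shown earlier ...

    /**
     * Called by the queue worker when the job fails.
     *
     * @param \Throwable $exception
     * @return void
     */
    public function failed(\Throwable $exception)
    {
        // React to the failure: log it, alert someone, clean up, etc.
        Log::error('CallExternalAPI failed: '.$exception->getMessage());
    }
}
```

This gives you a hook to be notified about silent failures, but it doesn't keep the worker process itself alive; that's the job of a process monitor.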
To keep the queue:work process running permanently (consuming your queues), you should use a process monitor such as Supervisor to ensure that the command does not stop running even if a job throws an exception. Supervisor restarts the command after it goes down, starting again from the next job and abandoning the one that failed.

Jobs will be executed in the background on your server, no longer depending on an HTTP request. This introduces some changes that I had to consider when implementing the job's code. Here are the most important ones in my mind:

How can I be aware if the job code fails?

Running in the background, you can't immediately see whether your job generates errors. You will no longer have immediate feedback like you do when running an http request from your browser. If the job fails, it will do so silently, without anyone noticing. Consider integrating a real-time monitoring tool like Inspector to bring every drawback to the surface.

The http request is gone

Your code will be executed from the cli. If you need request parameters to accomplish your tasks, you need to pass them in the job's constructor to use later during execution:

```php
<?php

// A job class example
class TagUserJob implements ShouldQueue
{
    public $data;

    public function __construct(array $data)
    {
        $this->data = $data;
    }
}

// Put the job in the queue from your controller
$this->dispatch(new TagUserJob($request->all()));
```

The session is gone

In the same way, you won't know the identity of the user who was logged in, so if you need the user's information to accomplish the task, you need to pass the user object to the job's constructor:
```php
<?php

// A job class example
class TagUserJob implements ShouldQueue
{
    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }
}

// Put the job in the queue from your controller
$this->dispatch(new TagUserJob($request->user()));
```

Understand how to scale

Unfortunately, in many cases a single queue and a single consumer aren't enough, and may soon become useless. Queues are FIFO buffers (First In, First Out). If you schedule many jobs, even of different types, each one needs to wait for the previously scheduled ones to complete before being executed.

There are two ways to scale:

Multiple consumers for a queue

With, say, five consumers, five jobs will be pulled from the queue at a time, speeding up the queue's consumption.

Single-purpose queues

You could also create a specific queue for each job "type" you are launching, with a dedicated consumer for each queue. In this way, each queue is consumed independently, without having to wait for the execution of the other types of jobs.

Towards Horizon

Laravel Horizon is a queue manager that gives you full control over how many queues you want to set up, and the ability to organize consumers, allowing developers to put these two strategies together and implement one that fits your scalability needs.

It all starts by running php artisan horizon instead of php artisan queue:work. This command scans your horizon.php configuration file and starts a number of queue workers based on the configuration:

```php
<?php

'production' => [
    'supervisor-1' => [
        'connection' => "redis",
        'queue' => ['advertisement', 'logs', 'phones'],
        'processes' => 9,
        'tries' => 3,
        'balance' => 'simple', // could be simple, auto, or null
    ]
]
```

In the example above, Horizon will start three queues with three processes assigned to consume each queue.

As mentioned in the Laravel documentation, Horizon's code-driven approach allows my configuration to stay in source control where my team can collaborate. It's also a perfect solution when using a CI tool.
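For the single-purpose-queues strategy to work, jobs must be published onto the right queue. A minimal sketch, reusing the `TagUserJob` from above and a queue name from the Horizon example (`onQueue()` is provided by the standard `Illuminate\Bus\Queueable` trait, which the job classes in this article already use):

```php
<?php

// Publish the job on a dedicated queue instead of the default one,
// so it is consumed independently of the other job types.
$this->dispatch((new TagUserJob($request->all()))->onQueue('logs'));
```

Without Horizon, a dedicated consumer for that queue can be started with php artisan queue:work --queue=logs.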
To learn the meaning of the configuration options in detail, consider reading this beautiful article: https://medium.com/@zechdc/laravel-horizon-number-of-workers-and-job-execution-order-21b9dbec72d7

My own configuration

```php
<?php

'production' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default', 'ingest', 'notifications'],
        'balance' => 'auto',
        'processes' => 15,
        'tries' => 3,
    ],
]
```

Inspector uses mainly three queues:

ingest is for processes to analyze data from external applications;
notifications is used to schedule notifications immediately if an error is detected during data ingestion;
default is used for other tasks that I don't want interfering with the ingest and notifications processes.

Using balance=auto, Horizon knows that the maximum number of processes to be activated is 15, which will be distributed dynamically according to the queues' load. If the queues are empty, Horizon keeps one process active for each queue, keeping a consumer ready to process the queue immediately if a job is scheduled.

Final notes

Concurrent background execution can cause many other unpredictable bugs, like the MySQL "Lock wait timeout exceeded" error and many other design issues. Read more here: https://www.inspector.dev/resolve-mysql-lock-wait-timeout-dealing-with-laravel-queues-and-jobs/

New to Inspector?

Inspector creates a monitoring environment specifically designed for software developers, avoiding any server or infrastructure configuration that many developers hate to deal with.

Thanks to Inspector, you will never need to install things at the server level or make complex configurations in your cloud infrastructure. Inspector works with a lightweight software library that you can install in your application like any other dependency. In the case of Laravel, you have our official Laravel package at your disposal.
Developers are not always comfortable installing and configuring software at the server level, because these installations are outside the software development lifecycle, or are even managed by external teams.

Visit our website for more details: https://inspector.dev/laravel/

Previously published at https://www.inspector.dev/what-worked-for-me-using-laravel-queues-from-the-basics-to-horizon/