1. Nginx load-balances with round-robin by default, sending requests to each listed server in turn; 2. it also supports least_conn (fewest connections), ip_hash (IP-based session persistence), weighted distribution, and other methods for tuning how traffic is spread; 3. health checks and failover are automatic, and their sensitivity can be adjusted with the max_fails and fail_timeout parameters. After configuring, test with sudo nginx -t and reload for the changes to take effect, and set the proxy headers so backend servers can see real client information.
Nginx is a powerful and lightweight tool for distributing incoming traffic across multiple servers — this is called load balancing. It helps improve application performance, reliability, and scalability by preventing any single server from getting overwhelmed.

Here's a basic breakdown of how to set up Nginx load balancing:
1. Default Load Balancing Method (Round Robin)
By default, Nginx uses round-robin — it distributes requests evenly across all listed servers in order.

Example config:
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
This means:

- Request 1 → 192.168.1.10
- Request 2 → 192.168.1.11
- Request 3 → 192.168.1.12
- Request 4 → back to 192.168.1.10
... and so on.
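For context, the upstream and server blocks shown above sit inside the http block of your main config. Here's a rough sketch of how a complete minimal file could be laid out (the empty events block and the file's location are assumptions about a default install, not part of the steps above):

# Minimal nginx.conf sketch (assumed layout; adjust paths for your install)
events {}

http {
    upstream backend {
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
        server 192.168.1.12:8080;
    }

    server {
        listen 80;

        location / {
            # every request is handed to the upstream group in round-robin order
            proxy_pass http://backend;
        }
    }
}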
2. Other Load Balancing Methods
You can change the algorithm depending on your needs:
Least Connections: Sends requests to the server with the fewest active connections.
upstream backend {
    least_conn;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
IP Hash: Ensures a client's requests always go to the same server (good for session persistence).
upstream backend {
    ip_hash;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
Weighted Load Balancing: Gives more traffic to stronger servers.
upstream backend {
    server 192.168.1.10:8080 weight=3;  # gets 3x more traffic
    server 192.168.1.11:8080 weight=1;
    server 192.168.1.12:8080 weight=1;
}
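Weights can also be combined with the other methods; here's a hedged sketch mixing least_conn with weights, assuming the first box is the strongest machine:

upstream backend {
    least_conn;                         # prefer the server with the fewest active connections
    server 192.168.1.10:8080 weight=3;  # while still biasing toward the stronger machine
    server 192.168.1.11:8080 weight=1;
    server 192.168.1.12:8080 weight=1;
}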
3. Health Checks & Failover
Nginx automatically marks a server as "down" if it fails to respond — and stops sending traffic to it until it recovers.
You can also manually mark a server as down:
server 192.168.1.11:8080 down;
Or use max_fails and fail_timeout to fine-tune:
server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
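Putting the pieces together, an upstream tuned for failover might look roughly like this (the backup flag is an assumption here: it keeps the third server idle until the primaries are marked down):

upstream backend {
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;  # marked down after 3 failures within 30s
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 backup;                        # spare: only gets traffic when the others are down
}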
Pro Tips:
- Always test your config:
sudo nginx -t
- Reload after changes:
sudo systemctl reload nginx
- Use proxy_set_header Host $host; and proxy_set_header X-Real-IP $remote_addr; in your location block so backend servers see real client info (see the sketch after these tips).
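As a reference for that last tip, a location block with those headers could look like this (the X-Forwarded-For line is an optional extra many setups add; it isn't required by the steps above):

location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;                                  # pass the original Host header
    proxy_set_header X-Real-IP $remote_addr;                      # pass the client's real IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # optional: append to the proxy chain
}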
That's it — basic Nginx load balancing in a nutshell. Not fancy, but solid and widely used in production settings.
Basically just define your servers, pick a method, and let Nginx do the rest.