Example of running a PHP script in CLI with custom memory limit:
php -d memory_limit=64M my_script.php
Source: http://cristian-radulescu.ro/article/php-cli-increase-memory-limit.html
# Make sure the repository is enabled first: /etc/yum.repos.d/ius-archive.repo -> enabled=1
yum install yum-plugin-replace
yum replace php --replace-with php56u
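Something like this should flip that flag (assuming the stock ius-archive.repo file – eyeball it first):
sed -i 's/^enabled *= *0/enabled=1/' /etc/yum.repos.d/ius-archive.repo
yum repolist enabled | grep -i ius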
Add this into your vhost (making sure to match the Directory directive with the correct path):
<Directory /var/www/vhosts/mydomain.com/uploads>
    SetHandler none
    SetHandler default-handler
    Options -ExecCGI
    php_flag engine off
    RemoveHandler .cgi .php .php3 .php4 .php5 .phtml .pl .py .pyc .pyo .sh
</Directory>
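To sanity-check it after a reload, drop a harmless test file into the uploads directory and request it (the file name here is made up – delete it afterwards):
echo '<?php echo "executed"; ?>' > /var/www/vhosts/mydomain.com/uploads/handler-test.php
curl -s http://mydomain.com/uploads/handler-test.php
# You should see the raw PHP source (or get a download), NOT the word "executed"
rm /var/www/vhosts/mydomain.com/uploads/handler-test.php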
For mod_php, you can just add this to your Apache LogFormat:
%{mod_php_memory_usage}n
Might be best to add it right at the end, so as not to break any log parsing.
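For example, tacked onto the end of the stock “combined” format (the format name and log path here are just illustrative):
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %{mod_php_memory_usage}n" combined_phpmem
CustomLog /var/log/httpd/domain.com-access.log combined_phpmem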
Credit: http://tech.superhappykittymeow.com/?p=220
For PHP-FPM (i.e. anyone with Nginx, or anyone with one of our optimised Magento setups), put the following into your FPM pool config file, probably here:
– /etc/php-fpm.d/website.conf (RHEL/CentOS)
– /etc/php5/fpm/pool.d/website.conf (Ubuntu)
access.log = /var/log/php-fpm/domain.com-access.log
access.format = "%p %{HTTP_X_FORWARDED_FOR}e - %u %t \"%m %{REQUEST_URI}e\" %s %f %{mili}d %{kilo}M %C%% \"%{HTTP_USER_AGENT}e\""
Notes:
• Permissions matter. Check that the User and/or Group (of THIS fpm pool) can write to /var/log/php-fpm (or /php5-fpm, whatever)
• %{HTTP_X_FORWARDED_FOR}e is there because I was behind a Load Balancer, Varnish and/or other reverse proxy.
• %{REQUEST_URI}e is there because this CMS (like most, now) rewrites everything to index.php. I want to know the original request, not just the script name.
• %{kilo}M %C – These are the kickers: memory usage and CPU usage per request. Ker-pow. Pick your favourite awk/sort one-liner to weed out the heavy hitters (there’s a sketch of one after these notes).
You can log pretty much any arbitrary header or environment variable (like REQUEST_URI), and you should find all the options in the comments under the default “www.conf” packaged config file.
Given that this starts with the PID, it should also help track down any segfaults you see in /var/log/messages / kern.log.
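As a sketch of that awk/sort one-liner, assuming the access.format above (the %{kilo}M column lands in awk field 12 there – adjust if your format differs):
sort -rnk12 /var/log/php-fpm/domain.com-access.log | awk '{print $12" KB", $7, $8}' | head -20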
Oh, and while you’re at it, PHP-FPM has a slow log you can enable.
Oh, and don’t forget your old friend logrotate.
Counting the average PHP-FPM process memory usage
Looking at the ‘ps’ output isn’t always accurate because of shared memory. Here’s a one-liner to count up my FPM pool, where “www” is the name of the FPM pool:
for pid in $(ps aux | grep fpm | grep "pool www" | awk '{print $2}'); do pmap -d $pid | tail -1 ; done | sed 's/K//' | awk '{sum+=$4} END {print sum/NR/1024}'
Thanks to Dan Farmer for his original Apache memory script, which made me think to do this.
Divide and conquer
Do make sure that multiple websites on the same server are using their own FPM pools; that way it’s much easier to see what’s what.
But you can also separate by URL. I’ve done this with the M-word, but the theory should stand for any CMS where the tasks performed under the admin section will be more intensive than normal front-end user traffic.
The following should work with WordPress, assuming the path is /wp-admin . Magento users can (and should) change their admin path from the default, so watch out for that (usually configured in local.xml)
Or anything really – maybe that “importvideo” script is especially memory-intensive and you don’t want to allow it to bloat up all your processes.
1. Set up a separate PHP-FPM pool.
– give it a different name, different log file names, and listen on a different socket/port.
– Perhaps with a much higher memory_limit
– Perhaps with many fewer pm.max_children (ask customer how many humans actually use the “backend”). You might only need 5 or so.
– Perhaps it needs a longer max_execution_time …. you get the idea.
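A minimal sketch of such a pool (pool name, user, socket path and all the numbers are illustrative, not copy-paste config):
[website-admin]
user = webuser
group = webuser
listen = /var/run/php5-domain.com-admin.sock
pm = static
pm.max_children = 5
request_terminate_timeout = 300
php_admin_value[memory_limit] = 512M
access.log = /var/log/php-fpm/domain.com-admin-access.log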
2. Send your “admin” traffic there.
Here’s how you might go about it. These are excerpts and ideas, rather than solid copy-paste config.
Nginx: something like this:
upstream backend {
    server unix:/var/run/php5-domain.com.sock;
}
upstream backend-admin {
    server unix:/var/run/php5-domain.com-admin.sock;
}
map "URL:$request_uri." $fcgi_pass {
    default        backend;
    ~URL:.*admin.* backend-admin;
}
… then in your ‘server{}’ bit, use the variable for fastcgi_pass:
fastcgi_pass $fcgi_pass;
Apache: something like this…
# Normal backend alias, corresponds with FastCGIExternalServer
Alias /php.fcgi /dev/shm/domain-php.fcgi

<Location ~ admin>
    # Override Action for "admin" URLs
    Action application/x-httpd-php /domain-admin.fcgi
</Location>
Alias /domain-admin.fcgi /dev/shm/domain-admin-php.fcgi
3. You can probably now LOWER the memory_limit for your “main” pool
– if the rest of the website doesn’t use much memory. Now bask in the memory you just saved for the whole application server.
4. Bonus: NewRelic app separation
Don’t let all that heavy Admin work interfere with your nice Apdex statistics – we know (and expect) your backend dashboard to be slower.
Put this in the FPM pool config:
php_value[newrelic.appname] = "www.domain.com Admin"
Credits: https://willparsons.tech/
If you are seeing the very-default-looking Magento page saying “There has been an error processing your request”, then look in here:
ls -lart <DOCROOT>/var/report/ | tail
The stack trace will be in the latest file (there might be a lot), and should highlight what broke.
Maybe the error was in a database library, or a Redis library…see next step if that’s the case.
General errors, often non-fatal, are in <DOCROOT>/var/log/exception.log
Other module-specific logs will be in the same log/ directory, for example SagePay.
NB: check /tmp/magento/var/ .
If the directories in the DocumentRoot are not writable (or weren’t in the past), Magento will use /tmp/magento/var and you’ll find the logs/reports/cache in there.
First, find the local.xml. It should be under <DOCROOT>/app/etc/local.xml or possibly a subdirectory like <DOCROOT>/store/app/etc/local.xml
From that, take note of the database credentials, the <session_save>, and the <cache><backend>. If there’s no <cache> section, then you are using filesystem so it won’t be memcache or redis.
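A quick way to eyeball those values (they’re usually wrapped in CDATA, so plain grep is enough):
grep -E 'session_save|<backend>|<host>|<username>|<dbname>' <DOCROOT>/app/etc/local.xml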
– Can you connect to that database from this server? authenticate? or is it at max-connections?
– To test memcache, “telnet host 11211” and type “stats” (lowercase).
– To test Redis, “telnet host 6379” and type “INFO”.
You could also use:
redis-cli -s /tmp/redis.sock -a PasswordIfThereIsOne info
If you can’t connect to those from the web server, check that the relevant services are started, pay close attention to the port numbers, and make sure any firewalls allow the connection.
If the memcache/redis info shows evictions > 0, then it’s probably filled up at some point and restarting that service might get you out of trouble.
ls -la /etc/init.d/mem* /etc/init.d/redis*
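To check evictions quickly (host/port/socket being whatever local.xml points at):
redis-cli -s /tmp/redis.sock -a PasswordIfThereIsOne info stats | grep evicted_keys
echo "stats" | nc -w 1 127.0.0.1 11211 | grep evictions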
3. Check the normal places – sometimes it’s nothing to do with Magento!
– Apache or nginx logs
– Is Apache just at MaxClients?
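The error log will usually say so outright (log paths and directive name vary by Apache version):
grep -iE 'MaxClients|MaxRequestWorkers' /var/log/httpd/error_log /var/log/apache2/error.log 2>/dev/null | tail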
– PHP-FPM max_children?
ps aux | grep fpm | grep -v root | awk '{print $11, $12, $13}' | sort | uniq -c
– Is your error really just a timeout, because the server’s too busy?
– Did OOM-killer break something?
grep oom /var/log/messages /var/log/kern.log
– Has a developer been caught out by apc.stat=0 (or opcache.validate_timestamps=0)?
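A quick look at what’s actually set (bear in mind the CLI can load different ini files to the FPM pool):
php -i | grep -E 'apc\.stat|opcache\.validate_timestamps'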
Credits: https://willparsons.tech/
cat > test.php <<EOF
<?php echo "<h1>This is a test page</h1>"; ?>
EOF
<?php
// Show all information, defaults to INFO_ALL
phpinfo();
?>
(command line)
php -r "phpinfo();"
cat > test.php <<EOF
<?php
echo '<br><br>This website is running as: <b>' . exec('/usr/bin/whoami') . '</b>';
echo '<br><br>From path: <b><i>' . getcwd() . '</i></b><br><br>';
echo '<br><b><font size="5" color="red">DELETE THIS ONCE TESTED!</font></b>' . "\n";
?>
EOF
If your PHP application requires sessions and is hosted on a scaled, highly available infrastructure, the sessions also need to be stored on a decentralised, highly available platform, so you don’t have to rely on session persistence options on the load balancer or on another Cloud Server.
Redis as a Service is a nice fit for this purpose.
Here’s an example using Rackspace Object Rocket: http://www.rackspace.co.uk/objectrocket/redis
To achieve this, you need to install the right PHP extension package. On CentOS/RHEL, there is an IUS package available:
yum install php56u-pecl-redis
After that, the php.ini should be changed like this:
session.save_handler = redis
session.save_path = "tcp://REDISOBJECTROCKETFQDN:PORT?auth=REDISPASSWORD"
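To confirm the extension is loaded and the handler was picked up (check the same SAPI/pool you changed, not just the CLI):
php -m | grep redis
php -i | grep -E 'session\.save_handler|session\.save_path'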
To increase performance and reduce the “noise” of repetitive DNS queries (especially in the case of SaaS, which uses an FQDN instead of an IP), it is also recommended to install nscd to cache the DNS lookups.
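On CentOS/RHEL 6 that’s roughly (package and service names assumed):
yum install nscd
service nscd start
chkconfig nscd on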
Putting PHP session files in a single pool can also be done with memcached, which has the benefit of faster lookups and of no longer relying on a slow filesystem to store session files (memcached keeps its data in memory). Just have your developers check that the code won’t have any issues using this instead of session files.
Install the package and set the configuration file (/etc/sysconfig/memcached on RHEL/CentOS):
PORT="11211" USER="memcached" MAXCONN="1024" CACHESIZE="128" OPTIONS="-l 127.0.0.1"
Make sure the memcache PHP extension is there – or install it.
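For example (the check first, then the install – the package name varies with your repo and PHP version):
php -m | grep -i memcache
yum install php-pecl-memcache   # or the matching IUS package if you're on php56u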
Change this in php.ini
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211?persistent=1&weight=1&timeout=1&retry_interval=15"
Check memcache stats
echo "stats" | nc 127.0.0.1 11211 echo "stats" | nc -w 1 127.0.0.1 11211 | awk '$2 == "bytes" { print $2" "$3 }'