— There is a server that converts video and runs a whole pile of PHP scripts, which periodically (or sometimes all at once) eat up a lot of memory. Because of this, essential system services (sshd, httpd, nginx, postgresql, monit, syslog) get killed when memory runs out. There have even been a couple of kernel panics.
And now, the question:
— What can control this? How do I limit the amount of memory a single process is allowed to use? And how do I make sure that when the limit is exceeded, the memory-hogging process is killed rather than the system services?
cgroups (control groups) is a Linux kernel feature to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, etc.) of groups of processes.
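A minimal sketch of a memory-capped cgroup, assuming root and a cgroup v2 hierarchy mounted at /sys/fs/cgroup (older systems use the v1 path /sys/fs/cgroup/memory/… with memory.limit_in_bytes instead); the group name phpjobs and the workload are made up:

```shell
# Create a group, give it a 200 MB hard memory limit, and move this
# shell into it (requires root).
mkdir /sys/fs/cgroup/phpjobs
echo 200M > /sys/fs/cgroup/phpjobs/memory.max    # exceeding this triggers the
                                                 # OOM killer inside the group only
echo $$ > /sys/fs/cgroup/phpjobs/cgroup.procs    # move the current shell in

# Everything started from this shell now inherits the limit:
# php convert.php                                # hypothetical workload
```

This way the kernel kills the greedy PHP worker inside the group instead of reaping sshd or postgresql.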
Schuyler.Kertzmann97 answered on October 8th 19 at 01:22
nice can also help (though it only lowers CPU scheduling priority; it does not limit memory).
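A small illustration of that: an unprivileged user may only raise a niceness value, never lower it. The converter command below is a hypothetical example:

```shell
# Hypothetical: run the video converter at the lowest CPU priority so
# system services stay responsive:
#   nice -n 19 php convert.php

# Demonstration: `nice` with no command prints the current niceness,
# so this shows the child really runs at niceness 10.
nice -n 10 nice    # prints 10
```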
Ila.Mayert answered on October 8th 19 at 01:24
Tell me, is there a tool that manages all of this? ulimit is very narrow, and it applies to processes started by the user, not by the system.
For example: *.php — 200 MB of memory, even though a 200 MB limit is already set in php.ini.
And so on.
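For completeness, ulimit can still cap an individual workload when it is started from a wrapper script, even though, as noted, it is per-shell rather than system-wide. A minimal sketch; the 200 MB figure mirrors the example above, and the PHP invocation is hypothetical:

```shell
# Cap the virtual address space of everything started from this subshell.
# ulimit -v takes kilobytes; 204800 kB = 200 MB.
sh -c '
  ulimit -v 204800
  ulimit -v              # prints the limit now in effect: 204800
  # exec php script.php  # hypothetical: the capped workload would go here
'
```

Lowering a limit needs no privileges; only raising a hard limit does.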
nikita.Stracke answered on October 8th 19 at 01:26
You can run top with the -b (batch) switch, parse its output with a shell/python/... script, and then kill the offending process.
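A minimal sketch of that approach, assuming Linux procps; it uses `ps` instead of `top -b` because its output is easier to parse, and the 200 MB threshold and the dry-run `echo` are my additions, not part of the original answer:

```shell
#!/bin/sh
# Find the single biggest memory consumer and report it if it exceeds a limit.
LIMIT_KB=204800                       # 200 MB, in kB, since ps reports RSS in kB

# pid=,rss=,comm= suppresses the header; --sort=-rss puts the biggest first.
ps -eo pid=,rss=,comm= --sort=-rss | head -n 1 |
while read -r pid rss comm; do
    if [ "$rss" -gt "$LIMIT_KB" ]; then
        echo "would kill $comm (PID $pid, RSS ${rss} kB)"
        # kill -TERM "$pid"           # enable after testing; run from cron
    fi
done
```

Run it from cron or a systemd timer; keep the `kill` commented out until you trust the selection logic.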
Art.Lebsack5 answered on October 8th 19 at 01:28
There is such a thing as the OOM score adjustment (oom_score_adj, formerly oom_adj); it controls the order in which processes are killed when memory runs out. Unfortunately, the default heuristic often misfires, killing a system service that uses a couple of megabytes instead of the process that has eaten most of the memory.
Both upstart and systemd provide ways to set this value for a service, but I think it is more convenient to use one of the scripts available on the Internet that set process priorities based on a configuration file.
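A sketch of adjusting the scores by hand through Linux's /proc interface. The sshd example is mine, and protecting a process (lowering its score) requires root; raising your own process's score does not:

```shell
# oom_score_adj ranges from -1000 (never OOM-kill) to +1000 (kill first).

# Protect sshd so the OOM killer leaves it alone (requires root):
#   echo -1000 > /proc/$(pidof -s sshd)/oom_score_adj

# Mark the current process as the preferred victim (no privileges needed;
# children such as spawned PHP workers inherit the value):
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj          # prints 500
```

Under systemd the same thing is expressed declaratively with `OOMScoreAdjust=` in a service's unit file, so it survives restarts.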