pcntl_fork() - PCNTL Process Control
pcntl_fork()
(PHP 4 >= 4.1.0, PHP 5, PHP 7)
Forks the currently running process at the current point of execution, producing a child process. Translator's note: fork creates a child process; both the parent and the child continue executing from the point of the fork. The difference is that the parent receives the child's process ID as fork's return value, while the child receives 0.
Description
pcntl_fork(void): int

The pcntl_fork() function creates a child process that differs from the parent process only in its PID (process ID) and PPID (parent process ID). Please see your system's fork(2) man page for specific details as to how fork works on your system.
Return Values
On success, the PID of the child process is returned in the parent's thread of execution, and 0 is returned in the child's thread of execution. On failure, -1 is returned in the parent's context, no child process is created, and a PHP error is raised.
Examples
pcntl_fork() example
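The example code did not survive extraction; what follows is a minimal sketch of the usual fork-and-wait pattern this page describes:

<?php

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // we are the parent
    pcntl_wait($status); // protect against zombie children
} else {
    // we are the child
}

?>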
See Also
pcntl_waitpid() - Waits on or returns the status of a forked child
pcntl_signal() - Installs a signal handler
"Fatal Error" has always been the bane of my world because there is no way to capture and handle the condition in PHP. My team builds almost everything in PHP in order to leverage our core library of code, so it was of the essence to find a solution for this problem of scripts bombing unrecoverably and us never knowing about it. One of our background automation systems creates a "task queue" of sorts and for each task in the queue, a PHP module is include()ed to handle the task. Sometimes however a poorly behaved module will nuke with a Fatal Error and take out the parent script with it. I decided to try to use pcntl_fork() to isolate the task module from the parent code, and it seems to work: a Fatal Error generated within the module makes the child task bomb, and the waiting parent can simply catch the return code from the child and track/alert us to the problem as needed. Naturally something similar could be done if I wanted to simply exec() the module and check the output, but then I would not have the benefit of the stateful environment that the parent script has so carefully prepared. This allows me to keep the child process within the context of the parent's running environment and not suffer the consequences of Fatal Errors stopping the task queue from continuing to process. Here is fork_n_wait.php for your amusement: Which outputs: php -q fork_n_wait.php FORK: Child #1 preparing to nuke... PHP Fatal error: Call to undefined function generate_fatal_error() in ~fork_n_wait.php on line 16 FORK: Parent, letting the child run amok... FORK: Child #2 preparing to nuke... PHP Fatal error: Call to undefined function generate_fatal_error() in ~/fork_n_wait.php on line 16 FORK: Parent, letting the child run amok... FORK: Child #3 preparing to nuke... PHP Fatal error: Call to undefined function generate_fatal_error() in ~/fork_n_wait.php on line 16 FORK: Parent, letting the child run amok... FORK: Child #4 preparing to nuke... PHP Fatal error: Call to undefined function generate_fatal_error() in ~/fork_n_wait.php on line 16 FORK: Parent, letting the child run amok... Done! :^)
Workaround to pcntl_fork() not being usable when PHP is run as an Apache module:

<?php

function background_job($program, $args) {
    # The following doesn't work when running PHP as an Apache module
    /*
    $pid = pcntl_fork();
    pcntl_signal(SIGCHLD, SIG_IGN);
    if ($pid == 0) {
        posix_setsid();
        pcntl_exec($program, $args, $_ENV);
        exit(0);
    }
    */

    # Workaround: detach via the shell instead, discarding all output
    $args = join(' ', array_map('escapeshellarg', $args));
    exec("$program $args >/dev/null 2>&1 &");
}

?>
I just thought of contributing to this awesome community and hope this can be of use to someone. Although PHP provides threaded options and multi-cURL handles that run in parallel, I managed to bash out a solution to run each function as its own process for non-threaded versions of PHP (a sketch follows below).

Usage: php -f /path/to/file

If you'd like to get the results back from a webpage, use exec(), e.g.:

echo exec('php -f /path/to/file');

Continue hacking! :)
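The contributor's original script was not preserved; this is a minimal sketch of the idea, assuming each "function" is an ordinary PHP callable (run_parallel is an illustrative name):

#!/usr/bin/php
<?php

// Fork one child per callable and wait for them all to exit.
function run_parallel(array $jobs) {
    $pids = array();
    foreach ($jobs as $job) {
        $pid = pcntl_fork();
        if ($pid == -1) {
            die("could not fork\n");
        } elseif ($pid == 0) {
            call_user_func($job); // the child runs the function...
            exit(0);              // ...and must exit, or it resumes the loop
        }
        $pids[] = $pid;
    }
    foreach ($pids as $pid) {
        pcntl_waitpid($pid, $status); // reap each child
    }
}

run_parallel(array(
    function () { sleep(1); print "job one done\n"; },
    function () { sleep(1); print "job two done\n"; },
));

?>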
It's been easy to fork processes with pcntl_fork(), but how can we continue processing only once all the child processes have completed? Here is one way to do that (sketched below)...
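A minimal sketch, assuming the goal is simply to block in the parent until every child has been reaped:

<?php

// Fork a few children, then have the parent wait until all of them
// have exited before continuing.
$children = array();
for ($i = 0; $i < 3; $i++) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("could not fork\n");
    } elseif ($pid == 0) {
        sleep(rand(1, 3)); // stand-in for the child's real work
        exit($i);          // the exit code lets the parent identify results
    }
    $children[] = $pid;
}

while (count($children) > 0) {
    $pid = pcntl_wait($status);
    if (pcntl_wifexited($status)) {
        print "child $pid exited with code " . pcntl_wexitstatus($status) . "\n";
    }
    $children = array_diff($children, array($pid));
}
print "All children finished; the parent can now continue.\n";

?>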
The reason for the MySQL "Lost Connection during query" issue when forking is the fact that the child process inherits the parent's database connection. When the child exits, the connection is closed. If the parent is performing a query at this very moment, it is doing it on an already closed connection, hence the error. An easy way to avoid this is to create a new database connection in the parent immediately after forking. Don't forget to force a new connection by passing true as the 4th argument of mysql_connect() (sketched below). This way, the child will inherit the old connection, work on it, and close it upon exit. The parent won't care, because it will open a new connection for itself immediately after forking. Hope this helps.
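A minimal sketch of that reconnect-after-fork approach, using the old mysql_* API the note refers to (host and credentials are placeholders):

<?php

// Reconnect after fork: the child inherits and uses the old link; the
// parent immediately opens a fresh one (the 4th argument, true, forces
// a new link instead of reusing the inherited resource).
$link = mysql_connect('localhost', 'user', 'pass');

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} elseif ($pid == 0) {
    mysql_query('SELECT 1', $link); // child works on the inherited link
    exit(0);                        // closing it on exit is now harmless...
} else {
    $link = mysql_connect('localhost', 'user', 'pass', true);
    pcntl_waitpid($pid, $status);
    mysql_query('SELECT 1', $link); // ...because the parent has its own link
}

?>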
There are quite a few questions regarding how file descriptors get handled when processes are forked. Remember that fork() makes a copy of the program, which means all descriptors are copied. Unfortunately, this is a rather bad situation for a PHP program, because most descriptors are handled internally by PHP or by a PHP extension.

The simple, and probably "proper", way to solve this issue is to fork beforehand. There really should be no need to fork at many different points in a program; you would simply fork and then delegate the work. Use a master/worker hierarchy. For example, if you need to have many processes that use a MySQL connection, just fork before the connection is made; that way each child has its own connection to MySQL that it, and it alone, manages (see the sketch below).

With careful and correct usage, fork() can be an extremely powerful tool. --Please remember to take proper care of your children.
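A minimal master/worker sketch of that advice (credentials are placeholders):

<?php

// Master/worker: fork first, connect afterwards, so each worker owns
// its MySQL descriptor outright.
$workers = 4;
for ($i = 0; $i < $workers; $i++) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("could not fork\n");
    } elseif ($pid == 0) {
        // Connect *after* the fork: this descriptor belongs to this
        // child and this child alone.
        $link = mysql_connect('localhost', 'user', 'pass', true);
        // ... delegate work to this worker here ...
        mysql_close($link);
        exit(0);
    }
}
// The master never opened a connection, so nothing is shared.
while (pcntl_wait($status) > 0);

?>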
If you want to execute some code after your PHP page has been returned to the user, try something like this:
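The note's code was not preserved; here is a minimal sketch of one way to do it with pcntl_fork() (which requires a CLI/CGI SAPI, not the Apache module):

<?php

// Send the page, then fork; the parent ends the request while the
// detached child carries on with the slow after-response work.
print "Page content for the user\n";
flush(); // push the response out before doing the slow work

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} elseif ($pid > 0) {
    exit(0); // parent: the response is done, terminate the request
}
// Child: detach from the session and do the after-response work.
posix_setsid();
sleep(10); // stand-in for slow background work (logging, mail, etc.)

?>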
It was driving me crazy that the script was killed a couple of hours after I logged out, even though I started it as:

php server.php >& logfile.txt

It looks like PHP somehow interacts with standard input, even though I do not use it. The solution was to start it with nohup:

nohup php server.php >& logfile.txt

or to daemonize it / run it as a daemon (e.g. fork() and close the file descriptors; see the sketch below).
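A minimal daemonize sketch, assuming the pcntl and posix extensions are available:

<?php

// Fork, let the parent exit, detach from the terminal, close stdio.
$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} elseif ($pid > 0) {
    exit(0); // parent exits; the shell gets its prompt back
}
posix_setsid(); // become session leader, detach from the tty
fclose(STDIN);  // close the inherited descriptors so nothing ties
fclose(STDOUT); // the process to the login session anymore
fclose(STDERR);
// ... server loop goes here ...

?>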
Fork in a foreach loop (sketched below):
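The original snippet was lost; a minimal sketch of forking one child per array element and reaping them all afterwards:

<?php

$items = array('a', 'b', 'c');
$pids  = array();
foreach ($items as $item) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("could not fork\n");
    } elseif ($pid == 0) {
        print "child handling $item\n"; // each child processes one item
        exit(0);
    }
    $pids[] = $pid;
}
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status); // reap every child
}

?>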
You should be _very_ careful with using fork in scripts beyond academic examples, or rather just avoid it altogether unless you are very aware of its limitations. The problem is that it forks the whole PHP process, including not only the state of the script but also the internal state of any extensions loaded. This means that all memory is copied, but all file descriptors are shared between the parent and child processes. And that can cause major havoc if some extension internally maintains file descriptors. The primary example is of course MySQL, but this could be any extension that maintains open files or network sockets.

Also, just reopening your connection in the parent or child isn't a safe method, because when the old connection resource is destroyed, the extension might not just close it, but may for example send a request to the server to log off, making the connection unusable. This happens with MySQL, for example, when PHP exits: the parent's next query will then always fail with "MySQL server has gone away". (It was suggested that processes kill themselves with SIGKILL to avoid any cleanup on shutdown.) (The only safe way would be to close all connections and reopen them after the fork, and even that might not be possible if an extension keeps one open internally.)

For a nice demonstration of the havoc fork can create, try the script below. It opens a MySQL connection, then forks, and runs queries from both parent and child, verifying that it receives the correct result. Run it (on the CLI, preferably) a few times, and you will find various possible results:
- very often it just hangs and doesn't output anything anymore
- also very often, the server closes the connection, probably because it receives interleaved requests it can't process
- sometimes one process gets the result of the OTHER process's query! (because both send their queries down the same socket, and it's pure luck who gets the reply)
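The note's demo script was lost; a minimal sketch that reproduces the described hazard (credentials and database are placeholders):

<?php

// Parent and child race on one inherited MySQL connection: both write
// queries down the same socket and read whatever reply comes back.
$link = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('test', $link);

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
}
$who = $pid ? 'parent' : 'child';

for ($i = 0; $i < 100; $i++) {
    $res = mysql_query("SELECT '$who' AS who", $link);
    if ($res === false) {
        // e.g. "MySQL server has gone away" once the replies interleave
        print "$who: " . mysql_error($link) . "\n";
    } else {
        $row = mysql_fetch_assoc($res);
        if ($row['who'] !== $who) {
            print "$who got the OTHER process's reply!\n";
        }
    }
}

?>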
Using pcntl_fork() can be a little tricky in some situations. For fast jobs, a child can finish processing before the parent process has executed the code related to the launching of the process, so the parent can receive a signal before it's ready to handle the child process's status. To handle this scenario, in the signal handler I add an id to a "queue" of processes that need to be cleaned up if the parent process is not yet ready to handle them. A stripped-down version of such a job daemon is sketched below and should get a person on the right track. Watch out that you don't spawn too many processes, though, as this creates its own problems.
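The contributor's job-daemon code was not preserved; this sketch shows only the "signal before the parent is ready" bookkeeping the note describes:

<?php

declare(ticks = 1); // needed so signals are actually delivered

$jobs    = array(); // pid => job info, filled in after a successful fork
$pending = array(); // children reaped before the parent registered them

function sigchld_handler($signo) {
    global $jobs, $pending;
    while (($pid = pcntl_waitpid(-1, $status, WNOHANG)) > 0) {
        if (isset($jobs[$pid])) {
            unset($jobs[$pid]);       // parent already knew about this child
        } else {
            $pending[$pid] = $status; // queue it until the parent catches up
        }
    }
}
pcntl_signal(SIGCHLD, 'sigchld_handler');

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} elseif ($pid == 0) {
    exit(0); // a fast child: may exit before the parent runs the next line
}
// The SIGCHLD may already have fired; check the pending queue first.
if (isset($pending[$pid])) {
    unset($pending[$pid]);
} else {
    $jobs[$pid] = 'some job';
}

?>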
I was able to get around the problem of not being able to run fork and exec from Apache PHP by calling the system 'at' command on Linux ("at ... now"; sketched below). You also have to set atrun -s in a crontab file (to run every minute) to ensure that things get kicked off quickly even if there is a heavy load on the machine. If you're the only one running batch jobs on a Linux box, this works.
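A minimal sketch of that workaround (the script path is a placeholder):

<?php

// Queue the job with at(1) instead of forking inside Apache; at's
// daemon then runs it outside the web server's process tree.
$cmd = '/usr/bin/php -f /path/to/job.php';
exec('echo ' . escapeshellarg($cmd) . ' | at now 2>&1', $output, $rc);

?>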
When using fork to run multiple child processes on a single job queue using MySQL, I used mysql_affected_rows() to prevent collisions between workers.

First I find a "free" job:

SELECT job_id FROM queue WHERE status="free"

Then I update the queue:

UPDATE queue SET worker_id={$worker_id} WHERE job_id={$job_id}

Then I check whether the row was actually changed (sketched below).
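A minimal sketch of that check; the status guard on the UPDATE is an added assumption on my part, so that the affected-rows test reliably detects a competing worker:

<?php

// Claim a free job; the status condition makes the UPDATE a no-op if
// another worker got there first. (The guard is an added assumption.)
$res = mysql_query('SELECT job_id FROM queue WHERE status="free" LIMIT 1');
$row = mysql_fetch_assoc($res);
$job_id = (int)$row['job_id'];

mysql_query("UPDATE queue SET worker_id={$worker_id}, status='taken'
             WHERE job_id={$job_id} AND status='free'");

if (mysql_affected_rows() == 1) {
    // we won the race: this worker owns the job
} else {
    // another worker claimed it first; try the next job
}

?>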