Discussion:
[squid-users] socket failure: (24) Too many open files
Cherukuri, Naresh
2018-10-05 15:57:39 UTC
Permalink
Hello Squid Group,


I am using squid 3.5.20 as a proxy server. I increased the memory from 12 GB to 32 GB and the maximum file descriptors from 4096 to 8192, and deployed this server into production on 09/26/2018.

I didn't have any problems for the past 10 days; everything worked as expected until today. Now, for the first time, I got the following errors in the cache log. Can someone advise or suggest any ideas here?


Error(s) in /var/log/squid/cache.log:
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files


[n*****@squidprod ~]$ free -m
                   total       used       free     shared    buffers     cached
Mem:               32004      15907      16097        138        295      14132
-/+ buffers/cache:            1480       30524
Swap:              24999          0      24999

Thanks,
Naresh
Antony Stone
2018-10-05 16:34:46 UTC
Permalink
Post by Cherukuri, Naresh
Hello Squid Group,
I am using squid 3.5.20 as a proxy server.
On what Operating System?
Post by Cherukuri, Naresh
I Increased the memory from 12 GB to 32 GB
You mean you put more memory into the server, or you re-configured something in
software (if so, what)?
Post by Cherukuri, Naresh
and Max file descriptors from "4096" to "8192"
How did you do that?

What does "ulimit -a" tell you?
Post by Cherukuri, Naresh
and deployed this server into production on 09/26/2018.
I don't have any problem from the past 10 days everything working as
expected till today.
How many users do you have, what sort of number of connections per
second/minute/hour (whatever is convenient for you to express) do you have
going through this machine?
Post by Cherukuri, Naresh
Now after 10 days for the first time, I got following errors on cache log
today. Can someone advise/suggest any ideas here?
socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
2018/10/05 11:03:59 kid1| comm_open: socket failure: (24) Too many open files
What do "cat /proc/sys/fs/file-max" and "cat /proc/sys/fs/file-nr" tell you?
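[For reference: file-nr reports three fields, namely the number of file handles allocated system-wide, the number allocated but unused, and the system-wide maximum, which is the same figure file-max shows on its own. A quick way to read them apart:]

```shell
# /proc/sys/fs/file-nr fields: <allocated> <allocated-but-unused> <max>
# The third field matches /proc/sys/fs/file-max.
read alloc unused max < /proc/sys/fs/file-nr
echo "allocated=$alloc unused=$unused system-max=$max"
```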
Post by Cherukuri, Naresh
total used free shared buffers cached
Mem: 32004 15907 16097 138 295 14132
-/+ buffers/cache: 1480 30524
Swap: 24999 0 24999
However that has nothing to do with files or descriptors.

What does something like "ulimit -a" or "lsof | wc" tell you?


Antony.
--
Angela Merkel arrives at Paris airport.
"Nationality?" asks the immigration officer.
"German," she replies.
"Occupation?"
"No, just here for a summit conference."

Please reply to the list;
please *don't* CC me.
Cherukuri, Naresh
2018-10-05 19:51:36 UTC
Permalink
Thanks for the quick turnaround!

Please find the following details you requested.
Post by Antony Stone
On what Operating System?
Operating system : Red Hat 7.0
Post by Antony Stone
I Increased the memory from 12 GB to 32 GB
You mean you put more memory into the server, or you re-configured something in
software (if so, what)?

We put more memory into the server: it had 12 GB before, and we increased it to 32 GB.
Post by Antony Stone
and Max file descriptors from "4096" to "8192"
[***@squidprod ~]# cat /etc/squid/squid.conf | grep "max_filedescriptors"
max_filedescriptors 8192
Post by Antony Stone
ulimit -a value
[***@squidprod ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 255941
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 255941
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

[***@squidprod ~]# lsof | wc -l
10875
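[Editor's note: a plain `lsof | wc -l` overcounts descriptors, since lsof also lists cwd, txt and mem-mapped entries, not just numbered FDs. Counting the entries under /proc/<pid>/fd gives the exact per-process figure; a sketch, using the current shell's PID as a stand-in for squid's:]

```shell
# One entry in /proc/<pid>/fd per open descriptor -- an exact count,
# unlike `lsof | wc -l`, which also counts cwd/txt/mem map lines.
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

count_fds "$$"   # demo on the current shell; use squid's PID in practice
```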

Thanks,
Naresh
Cherukuri, Naresh
2018-10-05 20:08:16 UTC
Permalink
Antony,


For just the squid process, the open file count:

[***@squidprod ~]# lsof -c squid | wc -l
4385

Thanks,
Naresh

Antony Stone
2018-10-05 21:06:04 UTC
Permalink
Post by Cherukuri, Naresh
For just squid process open files count.
4385
Squid is not the only thing running on this machine...
Post by Cherukuri, Naresh
10875
But you seem to have sufficient file descriptors configured *in Squid* (but maybe not in the operating system):
Post by Cherukuri, Naresh
max_filedescriptors 8192
So, Squid can have 8192 FDs.
Post by Cherukuri, Naresh
ulimit -a value
open files (-n) 8192
...and the system will provide 8192 FDs for every process combined...
Post by Cherukuri, Naresh
10875
I reckon that may well be your problem - you have a system-wide limit of 8192
file descriptors, and yet you are trying to use 10875 open files (this will
include local pipes, sockets, etc, so it's understandable that it's higher,
but it indicates you're going over the limit).
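[Editor's note: strictly, the `-n` / RLIMIT_NOFILE limit is enforced per process rather than across all processes combined; the value the kernel actually applies to a given running process can be read from /proc/<pid>/limits. A sketch, with the current shell's PID standing in for squid's:]

```shell
# "Max open files" here is the limit the kernel enforces for this one
# process; other programs' descriptors do not count against it.
grep -i 'max open files' "/proc/$$/limits"
```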
Post by Cherukuri, Naresh
Thanks,
Naresh
And, as I asked previously:

What do "cat /proc/sys/fs/file-max" and "cat /proc/sys/fs/file-nr" tell you?

How many users do you have, what sort of number of connections per
second/minute/hour (whatever is convenient for you to express) do you have
going through this machine?


Antony.
--
A good conversation is like a miniskirt;
short enough to retain interest,
but long enough to cover the subject.

- Celeste Headlee


Please reply to the list;
please *don't* CC me.
Cherukuri, Naresh
2018-10-08 13:14:20 UTC
Permalink
Yes, I also have Splunk running on this machine.
Post by Antony Stone
What do "cat /proc/sys/fs/file-max" and "cat /proc/sys/fs/file-nr" tell you?
[***@squidprod ~]# cat /proc/sys/fs/file-nr
4736 0 3256314
[***@squidprod ~]# cat /proc/sys/fs/file-max
3256314
Post by Antony Stone
How many users do you have, what sort of number of connections per
second/minute/hour (whatever is convenient for you to express) do you have
going through this machine?

I am not sure, but I would say more than 3000 connections per minute.

Thanks,
Naresh

Amos Jeffries
2018-10-09 00:55:04 UTC
Permalink
Post by Cherukuri, Naresh
Yes, I have also splunk running on this machine.
Post by Antony Stone
What do "cat /proc/sys/fs/file-max" and "cat /proc/sys/fs/file-nr" tell you?
4736 0 3256314
3256314
Post by Antony Stone
How many users do you have, what sort of number of connections per
second/minute/hour (whatever is convenient for you to express) do you have
going through this machine?
I am not sure, but I would say more than 3000 connections per minute.
The Squid "info" manager report (squidclient mgr:info) contains the
req/min details. Since you are running out of FDs, the number Squid
reports as in use will be a lower bound on how many it could be
handling if there were enough FDs.

Amos
Eliezer Croitoru
2018-10-08 19:18:04 UTC
Permalink
I recommend upgrading Squid if possible, due to a couple of known bugs.
Try bumping the server to 32k open file descriptors and see what
happens.
Depending on the load on the server, it might need more than 8k at
peak times.
The cache manager "info" page should give you a couple of technical
details on the status of the service.
It can also give some statistics which might shed some light on the
scenario.
Others might be able to give you more detail on the relevant cache
manager pages.

https://wiki.squid-cache.org/Features/CacheManager

Eliezer
--
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: ***@ngtech.co.il
Cherukuri, Naresh
2018-10-09 14:49:09 UTC
Permalink
Thank you for the quick turnaround!

I bumped the FD limit from 8k to 16k.
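[Editor's note: if the bump was made with `ulimit` alone it may not survive a service restart or reboot. On RHEL 7 with systemd, a drop-in is one way to make it stick; the sketch below is hypothetical, the 16384 value and file paths are illustrative, and squid.conf's max_filedescriptors should be raised to match.]

```
# /etc/systemd/system/squid.service.d/limits.conf  (hypothetical drop-in)
[Service]
LimitNOFILE=16384

# /etc/squid/squid.conf
max_filedescriptors 16384

# apply with: systemctl daemon-reload && systemctl restart squid
```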

Thanks,
Naresh

