
Too many open files

carfield1
New Contributor
Hi all, I just found out that it is possible to get a "Too many open files" error by running a select statement on an HDB across a lot of splayed tables.

It looks like kdb+ currently closes the file handles only after the select statement returns. I guess it would be better to close each file handle as soon as kdb+ has loaded that file's data?
5 REPLIES

charlie
New Contributor II
I expect you are encountering this when using compressed files, and when the select has no row constraint?
We could easily remove this error, but at the cost of using much more address space, which would make compressed files practically unusable in the 32-bit version.

There's no limit within kdb+ itself that forces this error; it comes only from restrictions in the environment, and you should be able to raise the limit with e.g.

ulimit -n 4096
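For reference, a quick way to check both the soft and hard limits before starting q (the `-S`/`-H` flags are standard in bash; exact values vary by system):

```shell
# Show current limits on open file descriptors.
# The soft limit (-S) is the one a process actually hits;
# it can be raised up to the hard limit (-H) without root privileges.
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell and any q session started from it:
ulimit -n 4096
```

Note this only affects the current shell; making it persistent is system-specific (e.g. PAM limits configuration on Linux).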

hth,
Charlie

Hello Charles,

Do you mean that setting ulimit -n to a higher value can increase the number of compressed files that can be open simultaneously? I thought 4096 was hard-coded:

http://code.kx.com/wiki/Cookbook/FileCompression:
> Q) Is there a limit on the number of compressed files that can be open simultaneously?
> A) Yes, currently the limit is 4096 files. There is no practical internal limit on the number of uncompressed files.

I was never able to open more than 4096 compressed files at once, no matter how high ulimit was. Or were you talking about a situation where a low ulimit prevented kdb+ from opening uncompressed files?

Thank you,
Igor

charlie
New Contributor II
Hi Igor,

I'm referring to compressed files only, and this scenario should arise very rarely, assuming your schema has lots of rows per partition rather than lots of partitions with a small number of rows each.

It looks like we need to update that documentation. 🙂

In v3.1, release 2013.02.21, we removed the limit of 4096 open compressed files; the number of open files is now bounded only by the OS limit.

$ rlwrap q
KDB+ 3.4

q).z.zd:17 2 6;{(hsym `$string x)set 1000#x}each til 10000
`:0`:1`:2`:3`:4`:5`:6`:7`:8`:9`:10`:11`:12`:13`:14`:15`:16`:17`:18`:19`:20`:2..
q)v:get each hsym key`:.
q)count v
10000
q)system"ulimit -n"
"32768"
q)-21!`:0
compressedLength  | 96
uncompressedLength| 8016
algorithm         | 2i
logicalBlockSize  | 17i
zipLevel          | 6i

upping the ulimit may require additional changes, e.g. see

Thank you, Charles,

As far as I remember, I got "Too many files" when I tried to run .Q.chk[] on a compressed database with very wide tables; one of them had 2500 columns or so. But that was kdb+ 2.8, which explains why upping ulimit didn't help. I had to create a slightly modified version of .Q.chk by  to solve the problem.

Best regards,
Igor

I see, thanks