First of all, I am using the 32-bit trial version and have no plans to upgrade to the 64-bit version any time soon. What I would like to ask is: what is the maximum file size this trial version can load? I have a 200 MB tick-data CSV file, and kdb+ handled it fine within a dozen seconds. But when I try to load a 1 GB tick-data file, kdb+ starts processing and then quits after its memory usage grows from 900 MB to 1.9 GB on Windows. I have 3 GB of RAM, so why does kdb+ quit before reaching the machine's limit? I read on kx.com that they recommend having about four times as much memory as the size of the data you want to handle. Is this the reason? It looks like kdb+ tried to handle the 1 GB file and gave up. Furthermore, when I tried to load a 2 GB file, kdb+ did not even start processing and complained that the file is too large.
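For reference, here is how I imagine loading the big CSV in chunks might look, so the whole file never has to sit in memory at once. This is only a sketch: `trade.csv`, the column names, and the parse types `"TSFI"` are made-up placeholders for my actual file (and it assumes the file has no header row):

```q
/ sketch: stream a large CSV in chunks with .Q.fs instead of loading it whole
/ trade.csv, the column names and the types "TSFI" are hypothetical
cols:`time`sym`price`size;
trade:();
.Q.fs[{`trade insert flip cols!("TSFI";",")0:x}]`:trade.csv;
\w   / show current workspace memory usage
```

`.Q.fs` reads the file in pieces and applies the function to each piece, so peak memory should stay well below the full file size, if I understand the docs correctly.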
Second, a question about kdb+ memory management. My plan is to backtest a strategy on currency tick data over a moving time window. Simply put, we query one week of tick data, about 20 MB, which should be fine, and store it as an in-memory table called "TICKDATA". Once the backtest on that week is finished, we load the next week's tick data into the same table, overwriting "TICKDATA", and so on. My question: since there is clearly a limit on memory usage, as described in my first question, what is kdb+'s memory usage after loading, say, 50 weeks of data? Since I overwrite "TICKDATA" repeatedly, I would guess kdb+'s memory usage stays around the 20 MB level, right? Or does it grow to 20 MB × 50 weeks = 1 GB and crash kdb+ as in my first question?
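To make the question concrete, this is roughly the loop I have in mind. `loadweek` and `backtest` are hypothetical placeholders for my own loader and strategy code; the memory-related calls (`\w`, `.Q.gc`) are the real kdb+ facilities I would use to check the behaviour:

```q
/ loadweek and backtest are hypothetical placeholders for my own code
run:{[wk] `TICKDATA set loadweek wk; backtest TICKDATA};
run each til 50;  / reassigning TICKDATA should free the previous week's data
.Q.gc[]           / (kdb+ 2.5+) ask kdb+ to return freed heap to the OS
\w                / workspace usage - hopefully near one week's size, not 50
```

My understanding is that kdb+ reference-counts objects, so the old week's data becomes garbage as soon as "TICKDATA" is reassigned, but I would like confirmation that usage really stays flat here.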
Thank you very much for your time and have a nice day~~