
RE: [personal kdb+] kdb+tick with schemaless events

david_demner
New Contributor
From: "David Demner (AquaQ)"
To: personal-kdbplus@googlegroups.com
Subject: RE: [personal kdb+] kdb+tick with schemaless events
Date: Wed, 29 Apr 2015 06:54:29 -0700
Forgot to include an example of what the performance impact might be, even when selecting a smaller part of the table:

q)n:1000000;t:`sym xasc([]time:n?0D; sym:n?200; data:n#enlist(1 2!(1 2;1 2)))
q)`:t/ set 0#t;
q)`:t/ upsert t;
q)@[`:t/;`sym;`p#]
q)\ts:100 select time from t where sym=9 /cost of selecting one simple column for one sym
31 66144
q)\ts:100 select time,sym from t where sym=9 /minimal incremental cost of adding a second column
31 131680
q)\ts:100 select time,sym,data from t where sym=9 /massive incremental cost when adding the (not much larger on disk) complex column
55684 168389072


-------- Original Message --------
Subject: RE: [personal kdb+] kdb+tick with schemaless events
From: "David Demner \(AquaQ\)" <david.demner@aquaq.co.uk>
Date: Wed, April 29, 2015 4:30 am
To: personal-kdbplus@googlegroups.com


q)t:([]time:3?0D; sym:til 3; data:3#enlist(1 2!(1 2;1 2)))
q)`:t/ set 0#t
q)`:t/ upsert t
q)value`:t/
time                 sym data
--------------------------------------
0D09:25:33.805802464 0   1 2!(1 2;1 2)
0D12:24:36.672738790 1   1 2!(1 2;1 2)
0D12:23:00.641756951 2   1 2!(1 2;1 2)


kdb+ throws that 'type error to protect you from yourself when you're trying to write down complex columns that can't be efficiently accessed.

-------- Original Message --------
Subject: Re: [personal kdb+] kdb+tick with schemaless events
From: <joshmyzie2@yandex.com>
Date: Wed, April 29, 2015 3:41 am
To: personal-kdbplus@googlegroups.com


Thanks for the reply, David.

Regarding performance, I realize I will take a hit, but my thinking was
that I will either only query a small time window / specific event type,
or I would split out a specific event type to a standard schema table.

Maybe I'm misunderstanding you, but how would I save my events (nested
dicts) to a hdb without serializing? For example, the following table
won't save unless I serialize the data column:

q)t:([]time:3?0D; sym:til 3; data:3#enlist(1 2!(1 2;1 2)))
q)t
time                 sym data
--------------------------------------
0D05:44:29.828280061 0   1 2!(1 2;1 2)
0D03:37:10.269978940 1   1 2!(1 2;1 2)
0D03:45:41.618905216 2   1 2!(1 2;1 2)
q)`:/tmp/t/ set t
k){$[@x;.[x;();:;y];-19!((,y),x)]}
'type
q.q))\
q)`:/tmp/t/ set update -8!'data from t
`:/tmp/t/
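Reading the splayed table back, the serialized column then has to be deserialized row-by-row with -9! (the inverse of -8!), e.g.:

```q
q)update -9!'data from value`:/tmp/t/   / each byte vector back to its original dict
```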


Josh


On 28 April 2015 20:24 UTC, David Demner (AquaQ) <david.demner@aquaq.co.uk> wrote:

> 1. I think your performance will be pretty bad especially if you have lots of events. This is especially true if you have longer hdb queries because the eventData column can't be randomly accessed.
>
> If each event type has the same schema, it may be better to split each one into a separate table (in your upd event). If your schema can change over time, have a look at dbmaint.q for HDB schema maintenance (or perhaps you won't need it since kdb+ reads the schema from the latest partition in your hdb)
>
> That being said, it's certainly possible if you're willing to pay the price.
>
> 2. I think JSON would just bloat it further for not much (no?) benefit. I don't think you need to serialize (just set the empty table then upsert the results possibly with .z.zd or manual compression) and in fact maybe serialization would slow it down further
>
> 3. I don't know much about tick.q or r.q. But it's likely pointless to serialize before (kdb is very clever about serializing where necessary)
>
> -----Original Message-----
> From: personal-kdbplus@googlegroups.com [mailto:personal-kdbplus@googlegroups.com] On Behalf Of joshmyzie2@yandex.com
> Sent: Tuesday, April 28, 2015 8:29 AM
> To: personal-kdbplus@googlegroups.com
> Subject: [personal kdb+] kdb+tick with schemaless events
>
>
> Hello,
>
> I am writing an event-driven application and I want to send all events to kdb for persistence and running real-time ad-hoc queries. Rather than hard-code the schema for all my events, which will change over time, I am thinking of sending a single table to my ticker plant:
>
> ([] time:`timespan$(); sym:`g#`symbol$(); eventData:())
>
> where "sym" will be the event name and eventData can be any dict.
> Example table with two event types:
>
> time                 sym eventData
> ---------------------------------------------------------------------------
> 0D11:14:57.333000000 e1 `xx`yy!1 2
> 0D11:14:57.333000000 e2 `aa`bb`cc!(5;0.3927524 0.5170911 0.5159796;`a`b`c)
> 0D11:14:57.333000000 e1 `xx`yy!5 2
>
>
> My questions are:
>
> 1. Is this strategy with kdb a terrible idea?
>
> 2. How should I serialize the eventData for EOD persistence? Just "-8!"? Any reason to use JSON instead?
>
> 3. Should I instead serialize the eventData BEFORE sending it to my ticker plant, so that I don't need to modify tick.q or r.q?
>
> Thanks,
> Josh
>
> --
> You received this message because you are subscribed to the Google Groups "Kdb+ Personal Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to personal-kdbplus+unsubscribe@googlegroups.com.
> To post to this group, send email to personal-kdbplus@googlegroups.com.
> Visit this group at http://groups.google.com/group/personal-kdbplus.
> For more options, visit https://groups.google.com/d/optout.


joshmyzie2
New Contributor

Ah, you're right. It reads the whole table into memory when selecting on the complex column.

I suppose this is a reason to just serialize the complex column manually with -8!'. I modified your benchmark and can now get efficient random access to all columns. I wonder why kdb+ doesn't do this automatically?

q)n:1000000;t:`sym xasc([]time:n?0D; sym:n?200; data:n#enlist(1 2!(1 2;1 2)))
q)`:t/ set 0#t;
q)`:t/ upsert t;
q)@[`:t/;`sym;`p#];
q)\l .
q)\ts:100 select time,sym from t where sym=9
9 131776
q)\ts:100 select time,sym,data from t where sym=9
53227 172583568

q)n:1000000;t:update -8!'data from `sym xasc([]time:n?0D; sym:n?200; data:n#enlist(1 2!(1 2;1 2)))
q)`:t/ set t;
q)\l .
q)\ts:100 select time,sym from t where sym=9
10 131776
q)\ts:100 select time,sym,-9!'data from t where sym=9
201 1682656
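That convention is easy to wrap. A minimal sketch, with hypothetical helper names pack/unpack (my own, not part of kdb+), assuming the complex column is called data:

```q
/ hypothetical helpers wrapping the -8!'/-9!' convention from the benchmark above
pack:{update -8!'data from x}    / serialize the complex column before writing
unpack:{update -9!'data from x}  / deserialize after the (now random-access) select

`:t/ set 0#pack t; `:t/ upsert pack t; @[`:t/;`sym;`p#];
system"l .";                     / remap t from disk (now the serialized version)
unpack select time,sym,data from t where sym=9
```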





david_demner
New Contributor
Hm, yeah... The difference is the serialized column is a nested binary list (that kdb+ can access randomly) vs a complex object that kdb+ has to do a full column scan on.


q)n:1000000;t:`sym xasc([]time:n?0D; sym:n?200; data:n#enlist(1 2!(1 2;1 2))); `:t/ set 0#t;`:t/ upsert t; @[`:t/;`sym;`p#]
t -> complex object
04/29/2015 02:13 PM 22 .d
04/29/2015 02:13 PM 73,000,008 data
04/29/2015 02:13 PM 8,007,800 sym
04/29/2015 02:13 PM 8,000,016 time
4 File(s) 89,007,846 bytes

q)n:1000000;t:`sym xasc([]time:n?0D; sym:n?200; data:-8!'n#enlist(1 2!(1 2;1 2))); `:t2/ set 0#t;`:t2/ upsert t; @[`:t2/;`sym;`p#]
t2 -> nested list (note the data# file)
04/29/2015 02:13 PM 22 .d
04/29/2015 02:13 PM 8,000,016 data
04/29/2015 02:13 PM 81,000,000 data#
04/29/2015 02:13 PM 8,007,800 sym
04/29/2015 02:13 PM 8,000,016 time
5 File(s) 105,007,854 bytes



> I wonder why kdb doesn't do this automatically?
The serialized version consumes 16 more bytes per table row; maybe that's why kx wouldn't want to do it automatically? Other than that, though, it looks like the gains are pretty substantial.
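For what it's worth, those 16 bytes per row can be accounted for from the directory listings above:

```q
q)count -8!1 2!(1 2;1 2)   / serialized payload per row; matches data# (81,000,000 % 1000000)
81
/ the complex-object file stores 73 bytes/row (73,000,008 for 1M rows), so -8!
/ adds 8 bytes of IPC header per row, and the nested list's anchor file (data,
/ 8,000,016 bytes) adds an 8-byte offset per row: 8+8 = 16 extra bytes/row
```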
