mhurd's comments | Hacker News

Cash might have been better, thanks to the wonderful world of CCTV that protects us all like a helpful Big Brother.


Out-of-order network speculation, packet slicing, and wire tapping that make you money ;-)


Yep, that was a rather bad typo. 15,825 us. No excuse, except I shouldn't post in the wee hours of the morning ;-) I'm not really arguing about measured latency, but you have to apply a best-case cost, as you cannot measure certain parts of hidden infrastructure that you can't touch. You do know 1000 bits on a 1Mbps line will take "at least" 1000 microseconds.

Even with a kernel-bypass driver on a SolarFlare, 4000 bits from a 100Mbps KRX UDP feed still took at least 40us in those older days, as that is simply the best-case time on the wire.
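To make the arithmetic concrete, a minimal C++ sketch of that lower bound (the numbers are the ones above):

    // Back-of-the-envelope serialisation delay: a message cannot arrive
    // faster than its bits can be clocked onto the wire, so bits divided
    // by line rate is a hard lower bound regardless of the software.
    #include <cstdio>

    double min_wire_time_us(double bits, double line_rate_bps) {
        return bits / line_rate_bps * 1e6;
    }

    int main() {
        std::printf("%.0f us\n", min_wire_time_us(1000, 1e6));    // 1000 bits @ 1 Mbps   -> 1000 us
        std::printf("%.0f us\n", min_wire_time_us(4000, 100e6));  // 4000 bits @ 100 Mbps ->   40 us
    }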

True on nanoseconds being the common measure today.


Some radio stuff here that may or may not be of interest, FWIW: https://meanderful.blogspot.com/2017/05/lines-radios-and-cab...



I was hoping to emphasise the crime issues. There are 40M slaves (UN stats) who need AML enforcement to work for them. Kids being abused and photographed probably don't like BTC paying for their servers. Enough is enough, really. It is turning into a large problem with pretty disjointed and tepid regulatory responses. The regulators have dropped the ball on this.

There was a BTC-e fine of $110M in mid-2017 that also noted the problematic role of Dash and tumblers. It is hard to imagine Monero and Zcash are more tolerable to FinCEN.

The bubbles are a worry when 80+ yr olds start asking me about BTC while its price is on the National News each night here. That's a slightly different problem. Ensuring that both currencies and virtual currencies comply with AML/KYC/CTF/Slavery regs should be top of mind for most ethical people, I'd hope.


Better-than-50% discounts for reasonable FPGA volume, true. Still expensive compared to a V100.

Microsoft is seeing ~90 TOPS with their MS-FP8 datatypes: https://www.top500.org/news/microsoft-takes-fpga-powered-dee...

It is certainly hard to compete against an NVidia V100 with its tensor ops.

It can get interesting though. There are quite a few graphs that TensorFlow cannot place on a GPU, especially with RNNs. An FPGA does give you more flexibility in architecture, which was, and would remain, one of the motivations for MS to choose them as a platform.


Yeah, it's a bit dense. Sorry about that.


No need to apologize, I agree with the assessment.

In the clear light of day, that comment was just me vomiting words onto the forum in a vain attempt to justify the effort I put into grokking that comment. :/


"The best architecture is no architecture" ... became a popular catchphrase for me in the early nineties. I built a streaming kind of trading system back then that integrated data flow and functional views, as some problems were easier to grok in one and harder in the other. Just thinking about things as simple functions: a = f(b) and code as a way of organising like with like worked for me.

Sometimes the object approach gets a bit silly. Organisation by the first parameter being special doesn't always work too well. For example, GAMS (the Guide to Available Mathematical Software) is a better classification for maths than simply putting all vector or matrix code in the one classification. Curation is hard at the end of the day.


How much automated/acceptance/regression testing did you guys have?

Would "releases" cause major brick-shitting (like the instance where Mr. L deployed a problematic release)?


It improved from then. We evolved unit tests and full system tests. A release would have to undergo a full simulation to show not only that it was problem-free but that it still produced profitable results that looked correct. We had test/debug shims for glibc functions that may cause syscalls, such as time or direct/indirect locale lookups (such as from a printf); these would error on use to prevent sneaky, costly timing hits jumping in.
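A minimal sketch of that shim idea, assuming the usual interposition trick (an illustration only, not their code): the test build links in a definition of the glibc call that aborts on use, typically via a shared object and LD_PRELOAD.

    // Build: g++ -shared -fPIC shim.cpp -o libshim.so
    // Run:   LD_PRELOAD=./libshim.so ./tests
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // Interposed in test builds: any sneaky time() call on the critical
    // path fails the run loudly instead of silently costing a syscall.
    // (noexcept to match glibc's declaration of time().)
    extern "C" time_t time(time_t* /*tloc*/) noexcept {
        std::fprintf(stderr, "FATAL: time() called on the critical path\n");
        std::abort();
    }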

We did have one rather insidious error where the risk check was too strict and prevented some orders going out. It took us a few months to track down, as it made the system feel clumsy even though it was still working. We assumed the market had changed a bit. It cost us a few million, and it was a mistake from our best software dev. It didn't change the fact he was our best software dev (he's at Google now). The lesson learnt was to check for the positives and not just the errors.

We introduced a number of risk-check evolutions. A separately developed shim was the final check on orders, to make sure that regardless of risk the order made sense, e.g. not a zero price. This code was structured to be as independent as possible from the code in the main system. That saved our butts a few times. Adding timing throttles was also important, to prevent the system reacting in a way that would send a silly number of orders per second. We also evolved to giving the broker HTML web pages where they could view the risk in real time and control it if necessary. This often included integration with broker risk systems, taking their risk files and integrating the constraints into our engines.
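As an illustration only (all names hypothetical, and as noted the real shim was deliberately written independently of the main system), the last-line check and throttle amount to something like:

    #include <cstdint>
    #include <cstdio>
    #include <ctime>

    struct Order { double price; std::int64_t qty; };

    // Final sanity gate: whatever upstream risk logic decided, an order
    // that is obviously nonsense (zero price, zero size) must not go out.
    bool sane(const Order& o) { return o.price > 0.0 && o.qty > 0; }

    // Crude throttle: never exceed max_per_sec orders in any one second.
    class Throttle {
        std::time_t sec_ = 0;
        int count_ = 0;
        const int max_per_sec_;
    public:
        explicit Throttle(int m) : max_per_sec_(m) {}
        bool allow(std::time_t now) {
            if (now != sec_) { sec_ = now; count_ = 0; }
            return ++count_ <= max_per_sec_;
        }
    };

    int main() {
        Throttle t(100);
        Order o{0.0, 10};  // zero price: blocked by the shim
        std::printf("%s\n", sane(o) && t.allow(std::time(nullptr)) ? "send" : "block");
    }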

At a later firm, we had automated test systems that also ran performance tests with hardware like that which Metamako now provides. On code check-in we would run not only the unit tests but also a suite of performance tests, each of which would reconfigure the network and run things with external performance measurement on the network. This allowed us to track performance bumps of tens of nanoseconds down to specific code deltas. Very useful indeed. A slightly customised version of Graphite let us chart the performance of all components and tests over time.
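A software-only stand-in for the idea (the real measurement was on the wire with external hardware, which is far less noisy than in-process timing): gate check-ins on a per-call latency budget so a regression shows up against a specific code delta.

    #include <chrono>
    #include <cstdio>

    int hot_path(int x) { return x * 2 + 1; }  // placeholder for the code under test

    int main() {
        constexpr int iters = 1000000;
        volatile int sink = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i) sink = hot_path(i);
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
        std::printf("%.1f ns/call\n", ns);
        return ns <= 50.0 ? 0 : 1;  // non-zero exit fails the build (budget illustrative)
    }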

Further to this, we evolved to replicating specific kernel, OS, and BIOS settings so that we could reproduce a production system exactly, and vice versa. Tuned BIOS settings and Linux kernels became important.

The risk controls and the unit and system tests are probably the most important things an HFT does. YOLO is very true.


Incredible--was everything written in C++? What was the process like for rolling a new release out to production trading?


The critical stuff was VHDL & C++. Mainly C++. Python & perl around the edges for housekeeping. Can't overstate the role of bash, as there tends to be a lot of scripting of simple components.

Production meant getting a branch to pass unit and system tests locally, then running in an acceptance-test environment against the official test exchange. The test exchange wasn't always totally realistic; you'd typically have to mirror some captured production traffic to make it somewhat realistic. Also, some exchanges had slightly different production versus test versions. That trapped us once: some spaces were insignificant in Canada in the spec and on the official test exchange, but not allowed in production. Uggh. No real way to test for that.

It evolved to Linux repo deployment, where a yum command would summon the versions & scripts. Convenient for rollbacks too.

Another aspect was testing the ML parameter set. This would typically be updated daily, and even though it was not a code change it is like one, as it affects behaviour. ML parameter sets would have to pass profit-simulation benchmarks to make it to production, which was often a challenge in construction and in testing on the grid to meet deadlines.
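In sketch form (names and numbers entirely hypothetical), the gate treats a parameter update like a code release:

    #include <cstdio>

    struct ParamSet { /* daily model weights, thresholds, ... */ };

    double simulate_pnl(const ParamSet&) { return 120000.0; }  // stub for the replay sim

    // A parameter set only ships if its simulated profit clears the benchmark.
    bool approve(const ParamSet& p, double benchmark_pnl) {
        return simulate_pnl(p) >= benchmark_pnl;
    }

    int main() {
        std::printf("%s\n", approve(ParamSet{}, 100000.0) ? "deploy" : "reject");
    }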


I think it has become more polarised as jitter (as a consequence of latency) has come down. It's also more democratised now, with many vendors providing good solutions.

Still, quite good profits in the system. See: https://meanderful.blogspot.com.au/2018/01/australian-popula...


Could you explain the information edge that asset managers have? I always thought the big guys were the easiest targets.

What kind of algos (you mention that in the article) can they employ to defend themselves or just perform better? I'd love to read more about it; it's very interesting. Can you point me towards some resources? I'm not in finance, but this "war" has always fascinated me.


The primary information edge an asset manager has is that they have a large enough order that it is not immediately consumable. This will have a price impact. So, if they take the HFT price on offer at a market, the HFT will lose money as the price will go against them.

Most of the algos I'm thinking of there are designed to minimise market impact. They do this by spreading the order over time and/or space (venues), as well as by balancing passivity and aggressiveness.
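As one concrete instance of "spreading over time", a toy TWAP-style slicer (a skeleton only; real algos also adapt across venues and between passive and aggressive tactics):

    #include <cstdio>
    #include <vector>

    // Cut a parent order into equal child slices so no single print
    // has to absorb the whole size at once.
    std::vector<long> twap_slices(long parent_qty, int intervals) {
        std::vector<long> slices(intervals, parent_qty / intervals);
        slices.back() += parent_qty % intervals;  // remainder goes in the last slice
        return slices;
    }

    int main() {
        for (long q : twap_slices(100000, 8))  // 100k shares over 8 intervals
            std::printf("%ld ", q);            // prints 12500 eight times
        std::printf("\n");
    }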

There is not a lot written that is unbiased, as most authors have a particular POV. Market microstructure textbooks are perhaps the healthiest place to start ;-)


Thanks a lot for sharing those bits of knowledge! I'll look into that. Have you ever considered writing a book? ;) I've added your blog to my long must-read list!

I love HN for such moments: out of the blue, a post on a super-interesting topic AND a knowledgeable person who doesn't mind answering some questions.


Not so talented, just persistent I think. One foot in front of the other helps the journey.

I've taken a year or so off. My aged mother had cancer and subsequent health issues. Now I'm doodling with HFT thoughts again and working on some transport IP stuff...


That's correct. Automated Trader requested it be published under a pseudonym when it went out in 2016. I didn't object and chose the Matrix reference.

