Hacker News | rubyskills's comments

Was it HFT or just processing large client orders? I'm having a hard time finding this out


It could be C++ for executing trades and the BEAM for the client orders. Obviously you'd want something like C++ there when you're racing. Though if they did the trading as a C++ NIF, it could all still run on the BEAM, avoiding a memory copy. Interesting thought exercise anyway.


Substring search for the word delve in the intro paragraph is all you need. :)
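That substring check is trivial to write down. A minimal sketch in Python, with a hypothetical function name (the "delve" heuristic is the comment's joke, not a real detector):

```python
def flags_as_ai_written(intro: str) -> bool:
    # Tongue-in-cheek heuristic from the comment above: flag the
    # intro paragraph if it contains the word "delve".
    return "delve" in intro.lower()

flags_as_ai_written("Let's delve into the details.")  # -> True
flags_as_ai_written("Here's a quick overview.")       # -> False
```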


We already have a push system called Check 21. You scan an image of a check and it sends the money directly to the bank instead of through the Fed. It was created after 9/11, as money couldn't move while the Fed was frozen.


Zelle exists, and you can scan a check that was written to you (Check 21).


Next business day. How long it takes can also depend on the receiving bank


Yes, and only next-day settlement. Because there's no real-time authorization, payments have two business days after settlement for the banks to report ordinary failures like insufficient funds.

How quickly a bank responds in that window depends greatly on the bank. In practice at decent scale, we see banks using every possible hour of that two day window to fail transactions.

An ACH debit made on Friday night technically has until open of business Wednesday to fail.
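That window is easy to work out mechanically. A minimal sketch, assuming the debit settles on the next business day and banks then have two business days to return it (holidays ignored for simplicity; the function names are hypothetical):

```python
from datetime import date, timedelta

def next_business_day(d: date) -> date:
    """Advance to the next weekday (ignores bank holidays)."""
    d += timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def ach_failure_deadline(debit_date: date, return_window_days: int = 2) -> date:
    """Estimate the last day a bank can report an ordinary failure
    (e.g. insufficient funds) for an ACH debit, assuming next-business-day
    settlement plus a two-business-day return window."""
    deadline = next_business_day(debit_date)  # settlement
    for _ in range(return_window_days):
        deadline = next_business_day(deadline)
    return deadline

# A debit initiated Friday night settles Monday; the two-day return
# window then runs through open of business Wednesday.
ach_failure_deadline(date(2023, 7, 7))  # -> date(2023, 7, 12), a Wednesday
```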


I do enjoy reading actual dialog coming from Kenny. :)


Reads like a content piece to push pgvector.

This is good feedback. I think this is more of a critique of my writing style than anything. I need to work on this!


I'll admit, I'm a pretty big fanboy of pgvector (and Postgres). I think it's a much better solution than some of these other vector databases that are available. I would like to see it gain more popularity in the AI space.

I've been watching this GitHub issue for a while and almost didn't post this.

"As we witness a mass migration from Twitter to Threads"

Threads launched yesterday; way too early to state that as fact.

Agreed, though they did just get 50 million users. That's competing with OpenAI adoption speeds!


This isn't quite accurate.

GPT-3.5 is 4k tokens and has a 16k version; GPT-4 is 8k and has a 32k version.

You are correct that this needs to account for both input and output. I suspect that when you feed ChatGPT longer prompts, it may try to use the 16k / 32k models when it makes sense.
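That kind of routing could look something like the sketch below. This is purely speculative: the ~4-characters-per-token estimate is a rough heuristic, the output reserve is an assumed parameter, and the function is hypothetical, not anything OpenAI has documented. The context sizes match the comment above.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def pick_model(prompt: str, reserved_output_tokens: int = 500) -> str:
    """Hypothetical routing: fall back to the extended-context variant
    when prompt + expected output won't fit the base 4k window."""
    needed = estimate_tokens(prompt) + reserved_output_tokens
    if needed <= 4_096:
        return "gpt-3.5-turbo"      # 4k context
    if needed <= 16_384:
        return "gpt-3.5-turbo-16k"  # 16k context
    raise ValueError("prompt too long for the 16k window")

pick_model("short question")  # -> "gpt-3.5-turbo"
pick_model("x" * 30_000)      # -> "gpt-3.5-turbo-16k"
```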


Yes, I'd like the details on this. My experience has been the opposite: either you prompt it correctly, or it already has the algorithm or data structure trained into its model.

