Guy talks about switching to the "Classic" version if
> you just want a simple, open source, local-only JSON-formatting extension that won't receive updates.
Wow, that sounds like a tough choice. JSON formatting is moving at such a fast pace that I don't know if I should pay a JSON formatting SaaS a monthly subscription, or if I really can live without updates.
That makes sense, because JWT is base64 encoded, and those base64 tokens are bigger and more expensive. JWT has 3 parts, so it's 3x more expensive, obviously.
Lol. I mean what the hell is this. I have this weird feeling this guy got tricked by an LLM into thinking this move is smart... "what you've built is not just a json formatter, it's the next big...".
I mean, good luck to that guy. Everyone should have a shot at turning their free work into something worth it. I think I've been using that extension as well. But yeah, I never cared enough to know if it was this one. But I do hope there are others who did & he can surprise me and turn this user base into customers of a commercial product. If he pulls that off, I'd be truly impressed.
It really is dramatic. The author wrote a very moving paragraph on his hard life as the maintainer of the JSON formatting experience. Someone up top pitched in on the dire state of the "OSS ecosystem".
I just hope the authors of the "Go Back With Backspace" extension (now in version 3.0) I critically rely on ever since Chrome sold out will not betray me. It needs access to all sites, which as someone above mentioned is because of the great design of the new Extension Manifest API thingy.
AI is such a blessing. I use it almost every day at work, and I've spent this evening getting a Bluetooth-to-USB mapper for a PS4 controller working by having ChatGPT write it for me, for a bigger project I'm working on. Yes, it's going to take some time to fully understand the code and adjust it to my own standards, but I've been playing a game for a few hours now and I feel zero latency and plenty of controller rumble, which I'm having fun giving some extra power. It pretty much worked with the first 250 lines of C it spewed out.
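Not my actual code (that's the 250 lines of C), but the core plumbing of a mapper like this is roughly: read raw HID reports from the Bluetooth side and push them back out through a Linux USB HID gadget. Here's a rough Python sketch of just that skeleton, with placeholder device paths; a real DS4 mapper also has to translate the reports, since Bluetooth framing and report IDs differ from USB:

    import os

    # Placeholder paths: the Bluetooth controller as a hidraw device and a
    # configfs-configured USB HID gadget. Check /dev on your own machine.
    BT_HIDRAW = "/dev/hidraw0"
    USB_GADGET = "/dev/hidg0"

    def forward_reports():
        src = os.open(BT_HIDRAW, os.O_RDONLY)
        dst = os.open(USB_GADGET, os.O_WRONLY)
        try:
            while True:
                report = os.read(src, 64)  # one HID input report
                if not report:
                    break
                # A real mapper translates here: DS4 Bluetooth reports use a
                # different report ID, framing and CRC than the USB reports
                # the host expects. Pass-through alone isn't enough.
                os.write(dst, report)
        finally:
            os.close(src)
            os.close(dst)

    if __name__ == "__main__":
        forward_reports()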
What's gonna be super interesting is that I'm going to have an RPi Zero 2 power up my machine when I press the controller's PS button. That means I might need to solder and do some electrical voodoo that I've never tried. Crossing my fingers that the plan ChatGPT has come up with won't electrocute me.
This is also the university that develops RumbleDB[0]. It uses JSONiq as its query language which is such a pleasure to work with. It's useful for dealing with data lakes, though I've only experimented with it because of JSONiq.
Something to consider when using SQLite as a file format is compression (correct me if I'm wrong!). Nothing is compressed by default, so you might end up with a large file unless you account for this, and you can't/won't always just gzip the entire db.
Sure. But if you have reasonably small files just compress the whole file, like MS Office or EPUB files do.
Or if your files are large and composed of lots of blobs, then compress those blobs individually.
Whereas if your files are large and truly database-y, made of tabular data like integers, floats, and small strings, then compression isn't really viable. You usually want speed of lookup, which isn't generally compatible with compression.
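A minimal sketch of the per-blob approach in Python (zlib is just a stand-in codec and the table layout is made up; the only real requirement is remembering to decompress on the way back out):

    import sqlite3
    import zlib

    conn = sqlite3.connect("archive.db")
    conn.execute("CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB)")

    def put(name: str, payload: bytes) -> None:
        # Compress each blob individually before it lands in the page store.
        conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?)",
                     (name, zlib.compress(payload)))
        conn.commit()

    def get(name: str) -> bytes:
        row = conn.execute("SELECT data FROM files WHERE name = ?", (name,)).fetchone()
        return zlib.decompress(row[0])

    put("hello.txt", b"hello " * 1000)
    assert get("hello.txt") == b"hello " * 1000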
Please do not use second-resolution mtimes (they cannot represent the high-accuracy mtimes that modern OSs use, so packing and unpacking loses precision or causes differences, e.g. in rsync), or build anything new using DEFLATE (it is slow and cannot really be made fast).
This seems completely orthogonal? This is an alternative to zip and tar built on SQLite:
> An "SQLite Archive" is a file container similar to a ZIP archive or Tarball but based on an SQLite database.
Your parent comment said that when you're using SQLite as an application format, the contents of the database don't get compressed. These two things have nothing to do with each other.
People who have experience with Aurora and RDS Postgres: What's your experience in terms of performance? If you don't need multi-AZ and quick failover, can you achieve better performance with RDS and e.g. gp3 with 64,000 IOPS and 3,125 throughput (assuming everything else can deliver that and CPU/mem isn't the bottleneck)? Aurora seems to be especially slow for inserts and also quite expensive compared to what I get with RDS when I estimate things in the calculator. And what's the story on read performance for Aurora vs RDS? There's an abundance of benchmarks showing Aurora is better in terms of performance, but they leave out so much about their RDS config that I'm having a hard time believing them.
We've seen better results and lower costs in a 1 writer, 1-2 reader setup on Aurora PG 14. The main advantages are 1) you don't re-pay for storage for each instance--you pay for cluster storage instead of per-instance storage & 2) you no longer need to provision IOPS, and it provides ~80k IOPS.
If you have a PG cluster with 1 writer, 2 readers, 10Ti of storage and 16k provisioned IOPS (io1/2 has better latency than gp3), you pay for 30Ti and 48k PIOPS without redundancy, or 60Ti and 96k PIOPS with multi-AZ.
With the same Aurora setup you pay for 10Ti and get multi-AZ for free (assuming the same cluster setup and that you've stuck the instances in different AZs).
I don't want to figure the exact numbers but iirc if you have enough storage--especially io1/2--you can end up saving money and getting better performance. For smaller amounts of storage, the numbers don't necessarily work out.
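Back-of-the-envelope, with completely made-up per-unit rates just to show the shape of the math (ignoring instance and IO charges entirely; none of these numbers are real AWS prices):

    # Hypothetical rates -- NOT real AWS pricing, purely illustrative.
    STORAGE_PER_TIB = 100.0   # $/TiB-month
    PIOPS_RATE = 0.05         # $ per provisioned IOPS-month

    instances = 3             # 1 writer + 2 readers
    storage_tib = 10
    piops = 16_000

    # RDS: every instance carries its own copy of the storage and PIOPS,
    # and multi-AZ doubles that again.
    rds_single_az = instances * (storage_tib * STORAGE_PER_TIB + piops * PIOPS_RATE)
    rds_multi_az = 2 * rds_single_az

    # Aurora: cluster storage is paid for once, shared by all instances,
    # and already replicated across AZs.
    aurora_storage = storage_tib * STORAGE_PER_TIB

    print(rds_single_az, rds_multi_az, aurora_storage)  # 5400.0 10800.0 1000.0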
There are also 2 IO billing modes to be aware of. There's the default pay-per-IO, which is really only helpful for extreme spikes and generally low IO usage. The other mode is "provisioned" or "storage optimized" or something, where you pay a flat 30% of the instance cost (in addition to the instance cost) for unlimited IO--you can get a lot more IO and end up cheaper in this mode if you had an IO-heavy workload before.
I'd also say Serverless is almost never worth it. Iirc provisioning instances was ~17% of the cost of serverless, so serverless only works out if you have roughly <4 hours of heavy usage (about 17% of the day) followed by almost complete idle. You can add instances fairly quickly and fail over for minimal downtime (of course barring running into the bug the article describes...) to handle workload spikes using fixed instance sizes without serverless.
Have you benchmarked your load on RDS? [0] says that IOPS on Aurora is vastly different from actual IOPS. We have just one writer instance and mostly write hundreds of GB in bulk.
We didn't benchmark--we used APM data in Datadog to compare setups before and after migration
I believe the article is talking about I/O aggregate operations vs I/O average per second. I'm talking strictly about the "average per second" variety. The former is really only relevant for billing in the standard billing mode.
Actually a big motivator for the migration was batch writes (we generate tables in Snowflake, export to S3, then import from S3 using the AWS RDS extension), and Aurora (with its ability to handle big spikes) helped us a lot. We'd see query latency (as reported by APM) increase a decent amount during these bulk imports on RDS, and it was much less impactful with Aurora.
Iirc with RDS PG some common queries went from roughly 4-5ms normally to 10-12ms during imports, and it was more like 6-7ms during imports on Aurora (mainly because we were exhausting IO during imports before).
For me, the big miss with Postgres Aurora RDS was costs. We had some queries that did a fair amount of I/O in a way that would not normally be a problem, but in the Aurora Postgres RDS world that I/O was crazy expensive. A couple of fuzzy queries blew costs up to over $3,000/month for a database that should have cost maybe $50-$100/month. And this was for a dataset of only about 15 million rows without anything crazy in them.
We were burned by Aurora. Costs, performance, latency, all were poor and affected our product. Having good systems admins on staff, we ended up moving PostgreSQL on-prem.
> There's an abundance of benchmarks showing Aurora is better in terms of performance but they leave out so much about their RDS config that I'm having a hard time believing them.
Aurora doesn't use EBS under the hood. It has no option to choose storage type or io latency. Only a billing choice between pay per io or fixed price io.
Precisely! That's why RDS sounds so interesting. I get a lot more knobs to tweak performance, but I'm curious if a maxed out gp3 with instances that support it is going to fare any better than Aurora.
I've had better results managing my own clusters on metal instances. You get much better performance with e.g. NVMe drives in a 0+1 RAID (~a million IOPS in a pure RAID 0 with 7 drives), and I'm comfortable running my own instances and clusters. I don't care for the way RDS limits your options on extensions and configuration, and I haven't had a good time with the high-availability failovers internally; I'd rather run my own 3 instances in a cluster, 3 clusters in different AZs.
Blatant plug time:
I'm actually working for a company right now ( https://pgdog.dev/ ) that is working on proper sharding and failovers from a connection pooler standpoint. We handle failovers like this by pausing write traffic for up to 60 seconds by default at the connection pooler and swapping which backend instance is getting traffic.
RDS PG stripes multiple gp3 volumes, so that's why RDS throughput is higher than a single gp3 volume's.
I think 80k IOPs on gp3 is a newer release so presumably AWS hasn't updated RDS from the old max of 64k. iirc it took a while before gp3 and io2 were even available for RDS after they were released as EBS options
Edit: Presumably it takes some time to do testing/optimizations to make sure their RDS config can achieve the same performance as EBS. Sometimes there are limitations with instance generations/types that also impact whether you can hit maximum advertised throughput
Only if you allocate (and pay for) more than 400GB. And if you have high traffic 24/7 beware of "EBS optimized" instances which will fall down to baseline rates after a certain time. I use vantage.sh/rds (not affiliated) to get an overview of the tons of instance details stretched out over several tables in AWS docs.
The Proton version will always work better if no one sets an example and encourages the use of native support. With Proton you are guaranteed never to reach the optimal potential or get the full advantages of the Linux/Wayland ecosystem, while with native versions you at least have a chance of getting there.
It is like judging someone for taking advantage of new CPU instructions that accelerate processing because the general instructions are already good enough.
Native doesn't automatically mean better - there are quite a few examples of games running better on Proton than with native executables (and yes, then we can start arguing that it just means the native port is done poorly, but I'm just saying don't assume native will always run better).
It seems like a similar argument around the popularity of third party engines, whether studios should use Unreal, or whether they have the expertise/resources to change to and use another engine, or make their own bespoke engine, and if that will produce better results.
I think that is not a fair comparison. Proton adds an additional layer that affects runtime performance and could be removed entirely. Switching to a different game engine changes the implementation of that layer instead of removing it.
When Proton started to get good, there were multiple stories of small game studios just dropping their bespoke Linux builds because the Windows->Proton version ran much much faster and required zero effort from them.
I'm thinking about how to properly test AWS Step Functions. The problem is that I can either mock the entire response for every state in JSON only, or call out to a lambda. What I want is to type check the evaluated JSONPath payload and the mocked JSON response, to ensure that my tests always adhere to global contracts/types written in JSON Schema.
I think it's doable by dynamically creating lambdas based on test cases I define in one way or another, perhaps like mocked integration services, that do nothing but validate that the event from SFN matches a schema and that the mocked response also matches a schema.
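Roughly what I'm picturing for one of those throwaway validator lambdas, sketched in Python with the jsonschema package and a made-up contract (the real schemas would be loaded from wherever the global contracts live):

    from jsonschema import validate

    # Hypothetical contract for one state; in practice this would come from
    # the shared JSON Schema files rather than being inlined like this.
    REQUEST_SCHEMA = {
        "type": "object",
        "required": ["orderId"],
        "properties": {"orderId": {"type": "string"}},
    }
    RESPONSE_SCHEMA = {
        "type": "object",
        "required": ["status"],
        "properties": {"status": {"type": "string"}},
    }
    MOCK_RESPONSE = {"status": "OK"}

    def handler(event, context):
        # 1. The payload SFN built (after JSONPath evaluation) must match the contract.
        validate(instance=event, schema=REQUEST_SCHEMA)
        # 2. The canned response must match its contract too, so the next
        #    state gets exercised with realistic input.
        validate(instance=MOCK_RESPONSE, schema=RESPONSE_SCHEMA)
        return MOCK_RESPONSE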
My concern is that I can't find prior projects doing this. My use case is mostly (exclusively at the moment) calling out to lambdas, so perhaps I can get away with this kind of type checking. But it's just weird that something like this doesn't already exist! Past experience has taught me that if no one has tried it before, my idea is usually not that good.
Let me know what you think!
(Would have liked to use durable execution which totally solves the typing issue, but can't in this case)