I don't understand why you and the other person are trying to "correct" them? I thought their point was obvious: MS Research innovates, and they then linked examples (including F#, which, as you pointed out, went on to be a successful product).
F# being in full production doesn't disprove what they said; it further proves it.
PS - I disagree with the above poster's claim that MS Research is the "only" innovative branch of Microsoft. I'm just posting to get clarification on what you and the other poster are getting at.
I want to be able to scale out by adding more machines. I want to be able to fail over automatically to another data center when the first one goes down. I have yet to see a straightforward way to accomplish this with PG. Their wiki lists a bunch of tools related to this, but as far as I can tell they are either abandoned or don't cover this. I can't understand all the positive things I read about PG.
The promotion is done by rebinding the URL of the database and restarting the app. This neatly shares a mechanism with password changes, which is one reason we decided it was worth throwing out network-transparent orthodoxy when it came to HA: the clients must be controlled anyway to deal with security considerations.
One approach I used when we migrated data centres, to avoid having to manage the timing of IP address changes in our apps, was haproxy.
We configured slaves in the new data centre, set up haproxy in the new data centre with the old data centre databases as the main backend and the new data centre databases as the backup, changed the apps to point to haproxy, shut down the masters and let haproxy shift all the traffic, then promoted the slaves once we were certain they'd caught up. We had a period of seconds where the sites were effectively read-only, but that was it.
We're planning to roll out haproxy with keepalived or ucarp to mediate between a lot more of our backend services.
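For concreteness, the backend part of the setup above might look roughly like this (hostnames and addresses are made up; this is a sketch, not our actual config):

```
# haproxy.cfg (sketch): send Postgres traffic to the old DC,
# falling back to the new DC only when the old servers go away
listen postgres
    bind *:5432
    mode tcp
    option tcp-check
    server old_dc_db 10.0.1.10:5432 check
    server new_dc_db 10.1.1.10:5432 check backup
```

Once the old masters are shut down, the health checks fail and haproxy shifts connections to the `backup` servers, which is what made the cutover window so short.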
> I want to be able to failover automatically to another data center when the first one goes down. I have yet to see a straight forward way to accomplish this with PG
Of all the issues that could cause a machine to decide that a failover is needed, most of the root causes make a failover actually undesirable. A hardware failure, for example (where failover is good), is far less likely than unreachability due to load (where failover is disastrous), and unless you are very careful, an automated solution will act the same way in both cases.
Add to that the huge cost of failing back, during which time there's no more slave to fail over to: until 9.4 is released, failing back requires you to copy all the data back to the failed master at the file-system level to bring it back up as a slave.
After 9.4, re-synchronizing an old failed master to the new master will actually be possible in most cases (a mistaken failover is usually covered by these).
In case of an emergency, first make sure that a failover would actually help (if you're down because of high load and a misconfiguration of your system, failing over won't help, but will only make things worse), then fail over manually.
As I said, there are way fewer possible emergencies where failing over would help compared to many, many more where failing over would actually cause more damage.
This is valid for all non-master-master database configurations I've had to deal with so far, but, again, it's even more pronounced with Postgres because of the very time-consuming (and bandwidth-consuming, which can mean "costly" when you cross the public internet) failback, during which you have nowhere to fail over to again.
If you really, really want to do it, have a look at pgpool (http://www.pgpool.net/), which can automatically fail over to a slave and can also read-load-balance across one or more slaves. It's quite the out-of-the-box solution.
As for scaling horizontally at the DB level, that can be achieved with foreign data wrappers calling out to other servers, but it isn't built in.
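As a sketch of the foreign-data-wrapper approach (the server name, credentials, and sharding scheme here are invented for illustration; postgres_fdw ships with Postgres as a contrib extension):

```sql
-- Expose a "users" table living on another Postgres server as a local table.
CREATE EXTENSION postgres_fdw;

CREATE SERVER shard1
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'shard1.internal', dbname 'app');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER shard1
  OPTIONS (user 'app', password 'secret');

CREATE FOREIGN TABLE users_shard1 (
  id    bigint,
  email text
) SERVER shard1 OPTIONS (table_name 'users');
```

Queries against users_shard1 get shipped out to shard1, but the application (or a view layer) still has to decide which shard to hit, which is why it's fair to say horizontal scaling isn't built in.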
While Postgres doesn't rival Oracle and MS SQL Server in feature checkboxes, it's a very solid DBMS with many advanced SQL features, and it's free and open source. You can do a lot without hitting the scaling problems.
Well, in the world of free software you have a choice - an easily-scaled database system that offers you few guarantees and is very hard to code safely against, or one that is harder to scale but easier to code safely against. That doesn't make PG bad - it's just a choice you make. If you want to pay, you can get a little bit closer to having both.
Of course you hear lines like you quoted all the time - that's because it's true. The overwhelming majority of applications don't have needs beyond a master-slave pair of DB systems - and trends are moving in favour of that direction every day as RAM gets cheaper. Stack Overflow runs on a single not-that-beefy master-slave DB system.
Typical web applications are insanely amenable to read caches, and that's where you do most of your scaling. If you're at the point where your scaling needs truly exceed a large single DB system, I'd hope you have some money to throw at the problem.
Imagine my users table has a unique constraint on email address, or my hotel_room_reservations table has to guarantee I never store overlapping reservations for the same room. In either case, on insert and on update, I need to check the new or updated row against a bunch of other rows.
Those scenarios make horizontal scaling not so simple. Sure, get rid of those constraints, and you can insert as fast as your network can ship the bytes. But that's not a database anymore. That's a glorified flat file.
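In Postgres the reservation example can be expressed directly as an exclusion constraint (a sketch; the table and column names are made up, and the btree_gist extension is assumed to be available):

```sql
CREATE EXTENSION btree_gist;  -- lets a GiST index handle plain = on room_id

CREATE TABLE hotel_room_reservations (
  room_id int     NOT NULL,
  during  tsrange NOT NULL,
  -- reject any two rows with the same room and overlapping time ranges
  EXCLUDE USING gist (room_id WITH =, during WITH &&)
);
```

Enforcing that check atomically is exactly the kind of cross-row coordination that stops being simple once the rows live on different machines.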
That does not have automatic failover. I really want to use Postgres at work, but every time I start reading about failover, load balancing, sharding and so on, it seems like such a mess with PG.
I just want sharding with automatic failover to separate datacenter in a simple package...
AFAIK the only solution to this in the MySQL world is MHA, and that seems not much better than the way Postgres does it. Which relational databases are you referring to that do have this?
I bought a Samsung E2370 as a hiking phone for 70 USD. The usual one-day battery life is no good if you plan to hike for weeks. According to the specs it has 90-day standby; according to the web site it's only 65 (not sure why they differ). When I had it idling on my desk, I had to charge it after 70 days. I guess this is what you get with a modern bulky battery if you just scale down on features.
Neither is this. The page says it works on all platforms, but when I try the code editor on my Windows Phone 8, it's clear that it does not. The page scrolls to weird locations when I just type. I typed "Hello" and a second later the text was scrolled out of visibility. I spent a few minutes just trying to type, but it really does not work. (This was typed on the same phone without weird scrolling...)
That is just FUD in my opinion. I haven't had any issues with getting my applications working on multiple platforms. Have you had performance issues with C# on iOS?
Be practical. I've had to support WordPress on IIS before; it's always a lot of hassle using tech outside its native environment. Same with MySQL + .NET. Entity Framework, for example, played really poorly with MySQL.
I imagine using Mono is not for the faint-hearted. For example, in the new MS programming language thread [1] today, someone mentioned they ended up giving up on the .NET GUI controls and using GTK# instead because they were so unreliable. Another mentioned that the transition from one SQL driver to another had left a lot of projects hanging in the wind, with SQL breaking on Mono but working on Windows.
I don't think that's a fair comparison. Rust doesn't come with a set of functioning, well-tested and cross-platform GUI libraries and database drivers either.
Running C# (the language) with the BCL on Mono is pretty painless, and offers a ton of functionality.