On the quadrupling of the estimate from $250 million to $1 billion between 1995 and 1996, the article states:
"The cost increase was the result of detailed engineering studies conducted during the year or so after the initial estimate was released. Among other things, soil testing in the Bay had revealed that bridge pilings would need to be anchored “deeper into bedrock than expected,” she writes."
Now hindsight is 20/20 and I am not an engineer in this field, but if you're floating an estimate that isn't informed by the engineering studies needed to make it accurate, you probably shouldn't have given that initial estimate in the first place. At the very least it should have been presented as a range, or with a huge disclaimer that once the detailed studies were done, the final cost could be a multiple of the estimate.
There does seem to be a "center of gravity" in SF that is incredibly bizarre to me. For a lot of people, if you live west of Divis or south of 24th St, you might as well be, for all intents and purposes, in Oakland. A lot of the complacent crowd in SF seems to live in the Mission/SOMA/Pacific Heights/Marina/Russian Hill bubble and never gets out of it.
Yet the other three-quarters of the city, the part outside that bubble, is really, really awesome. It may not be "trendy" or anything, but those neighborhoods are full of great local places and "real" people. I think it's easy for everyone analyzing this situation to forget they exist.
This comes back to transportation a bit. Living in the Outer Sunset or Outer Richmond and commuting to SOMA or the Financial District means either a really long ride on packed buses or an N/T/etc. trip fraught with delays getting onto and down Market Street. It takes me 35 minutes to get from my house in North Oakland to work off the Embarcadero station. When my wife and I lived in the Inner Richmond (6th and Fulton), it took her 55 minutes to get to her office near 16th and Bryant.
I think transit here generally gets a bad rap (my wife and I used Muni basically without issue for five years living in the city and loved it), but when you're deciding where to live, the commute from (and the colder weather of) the western neighborhoods is a big deal.
Yeah, that's totally fair. I live at Geary and Stanyan and made liberal use of the 38L and 38AX/38BX buses, which cut that time down a lot. I used to go to the gym every morning before work and could be door to door on the 38BX in about 25 minutes most days. If I could get a seat, I kind of enjoyed the time on my phone or with a book, but that's not always possible either.
I commute by bike now, which is certainly the fastest way to get around the city - perhaps just not the safest or least-sweaty way.
Yup, public transit in areas like the Outer Sunset is pretty bad. I've never lived there, but I did stay in a hacker house there once while searching for an apartment, and it takes a good 40 minutes on a crowded bus to get to Market St. I also rarely find an Uber or Lyft nearby.
Automatic fail fast based on monitoring sounds like an awesome alternative to having latency cascade through the system and then scrambling to "fix" the problem by reducing client timeouts.
I agree, though my first thought is that many things work better if you add a little randomness to the equation. Something like: abort X% of the time, where X is based on how far latency is over an acceptable threshold. That should find a happy medium where a resource runs at max capacity without being overloaded.
Of course this is all assuming you have some sort of useful redundancy.
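A toy sketch of that idea (the class name, the linear ramp, and the parameters are all my own invention for illustration, not anything from Hystrix):

```ruby
# Toy latency-based load shedder (illustrative only, not a Hystrix API).
# The probability of rejecting a request grows linearly as observed
# latency climbs past an acceptable threshold, reaching 100% at a cap.
class LatencyShedder
  def initialize(threshold_ms:, cap_ms:)
    @threshold_ms = threshold_ms
    @cap_ms = cap_ms
  end

  # Fraction of requests to reject for a given observed latency.
  def shed_probability(observed_ms)
    return 0.0 if observed_ms <= @threshold_ms
    return 1.0 if observed_ms >= @cap_ms
    (observed_ms - @threshold_ms).to_f / (@cap_ms - @threshold_ms)
  end

  # Randomized decision: independent per request, so a fleet of servers
  # sheds load smoothly instead of flapping between all-on and all-off.
  def shed?(observed_ms)
    rand < shed_probability(observed_ms)
  end
end
```

With a 100ms threshold and a 500ms cap, a backend observed at 300ms would have roughly half its requests shed, which is the "max capacity but not overloaded" middle ground.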
That's an interesting thought and probably something that would make a good addition to the library.
It could be part of the default library or perhaps a custom strategy for the circuit breaker (once I finish abstracting it so it can be customized via a plugin: https://github.com/Netflix/Hystrix/issues/9).
At the scale Netflix clusters operate they basically get this randomness already because circuits open/close independently on each server (no cluster state or decision making).
Thus the cluster naturally levels out to how much traffic can be hitting the degraded backend as circuits flip open/closed in a rolling manner across the instances.
Also, doing this makes sense even when a dependency doesn't have a useful redundancy and must fail fast and return an error.
It is far better to fail fast and let the end client (a browser, iPad, PS3, Xbox, etc.) retry and hopefully reach the 2/3 of instances that can still respond, rather than let the system queue up (or go into server-side retry loops that DDoS the backend), fail anyway, and let nothing through.
We obviously prefer to have valid fallbacks, but many dependencies don't, and in those cases that's what we do: fail fast (timeout, reject, short-circuit) on instances that can't serve the request and let the clients retry. In a large cluster of hundreds of instances, a retry almost always gets a different route through the API instances and backend dependencies.
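The client-side half of that can be sketched in a few lines (instance names and the one-third-degraded split are made up for illustration):

```ruby
# Toy client retry across a fleet where some instances fail fast.
INSTANCES = (1..9).map { |i| "api-#{i}" }
DEGRADED  = INSTANCES.take(3) # pretend ~1/3 of the fleet is failing fast

# A degraded instance rejects immediately instead of queueing the request.
def call(instance)
  raise 'fail fast' if DEGRADED.include?(instance)
  "ok from #{instance}"
end

# Each retry picks a random instance, so the client almost always
# lands on a healthy route within a few cheap attempts.
def call_with_retries(attempts: 3)
  attempts.times do
    begin
      return call(INSTANCES.sample)
    rescue RuntimeError
      next # fast failure is cheap; just try another instance
    end
  end
  'error: all attempts exhausted'
end
```

With 2/3 of instances healthy, three attempts fail only about (1/3)^3 ≈ 4% of the time, versus a guaranteed pile-up if the degraded instances were allowed to queue.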
> you're just given some basic data structures that handle pretty much everything under the sun, and you go from there
This only works because your n is small, maybe a few hundred, so it doesn't matter. When you start dealing with millions or billions of records, this stuff matters. Quite a lot.
So really, it's not the language, it's the size of your data - the size of n - that matters.
Exactly, and how many web apps deal with millions of data points? Not many, as far as the view layer is concerned. Perhaps you'll have millions of rows in your DB, but you typically won't process all of those, at once, within PHP or JS. At least in my experience, most data processing on that scale happens in your OLAP layer (and thus is fully removed from the jurisdiction of PHP and JS).
Especially given single-page apps, you should never be dealing with millions of objects; with pagination and such, it's usually under 1,000 at a time, more typically 100 or so.
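To make the "size of n" point concrete, here's a sketch (sizes chosen arbitrarily) of where the basic-data-structure choice starts to bite:

```ruby
require 'set'

# With a few hundred records, a linear scan through an Array is fine.
# With hundreds of thousands, the same "basic data structure" choice
# dominates runtime: Array#include? is O(n) per lookup, while
# Set#include? is O(1) amortized (hash lookup).
records = (0...100_000).to_a
as_set  = records.to_set # one-time O(n) build cost, then cheap lookups

# Both answer the same question; only the per-lookup cost differs.
[0, 50_000, 99_999, -1].each do |id|
  raise 'mismatch' unless as_set.include?(id) == records.include?(id)
end
```

At n = 100 nobody notices the difference; at n in the millions, doing many Array lookups turns into billions of comparisons, which is exactly the view-layer-vs-OLAP split being described.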
I'm not sure if you can count this as a startup, but I bought my king-sized Sleep Innovations (memory foam) mattress with Amazon Prime for $530. And if I'd wanted the less-thick 10" one, it would have been $400.
The damn thing was ~100 lbs in a giant box, and I got it shipped to me free. It's super comfortable and well worth the money - remember, you sleep for something like 25% of your life.
Mattresses seem like less of a specialty item than eyeglasses, so I wonder if big online retailers like Amazon can just cut out the middleman and serve 80-90% of customers?
The problem is that many people like to lie on a mattress before they buy it. Sure, that's a little silly since lying on it for a minute or so is unlikely to capture the experience of tossing and turning on it for a night, but it's clear that it's a competitive advantage to let the consumer compare a few models in a physically direct way.
Warby Parker can get away with a similar consumer requirement for glasses because glasses are small, light, and easy to ship. They can send you half a dozen samples and let you pick the one you want. Not so easy with mattresses!
The car market has this problem as well (in addition to others, like the protectionist rackets that the dealerships have set up).
> The problem is that many people like to lie on a mattress before they buy it.
One can get all of the benefit of this by selling a single mattress, but also sending 3 or 4 foam mattress toppers to choose from. Correctly designed packaging would let the customer re-roll the topper, then use a vacuum pump to collapse the rolled topper back into a compact form for return mailing. (The pumps would be cheap and disposable, so wouldn't be returned.)
Using a system like this, one could become the Zappos of mattresses. There would still be a restocking and return fee for the mattress, but one could let the customers exchange and try toppers to their heart's content, so long as they took good care of the merchandise.
Memory foam mattresses tend to come vacuum-packed in a plastic sleeve and box that it's nigh impossible to ever fit them back into. I suppose if you had a specialized team that could come to your home, repackage it, and take it away, it might work, but that doesn't sound like it would scale well outside of a large city.
My wife and I almost got a Sleep Innovations mattress, but in the end we felt that we couldn't buy a mattress that we hadn't had a chance to try out. We both like a very firm mattress, which can be difficult to find.
While it's true that lying on a mattress for a few minutes won't tell you whether it's the best one or not, it sure did help us rule out many mattresses. We ended up getting an Ikea mattress, which turned out to be quite nice.
Ikea also seems to be the only place in the UK that sells decent slatted frames to put under your mattress. I guess British people aren't familiar with the concept.
Ah yes, we previously had a box spring but opted for the slatted frame. I like it a lot, and I like having the bed lower to the ground (although the best my back ever felt was when I slept on tatami for a semester in Japan).
I bought one from Costco, and didn't like it. They gave me a refund, and the choice to either donate it to charity or have them come pick it up and take it to the dump.
I got my last mattress from Silver State Industries (Nevada prison manufacturing) for under $500 (Cal King). It is more comfortable than the Serta I had originally spent around $1,500 on.
You might want to actually know what you are talking about before commenting. Inmates earn an hourly wage (not a great one), get training on how to do the work, etc. The money they earn gets split a few different ways: some goes to restitution if applicable, and some is available for the inmates to spend at the canteen or save for when they get out.
Care to provide the source for the $0.13/hr? Inmates are paid as little as $1/hr and as much as minimum wage, with the lowest hourly rate going to the inmate firefighters because of how the budget is set by the forestry service. The last time I saw the $0.13/hr number trotted out, it was also skewed by jobs that paid per piece rather than per hour. Inmates who have to pay restitution pay about 5% of their wages, and each working inmate pays deductions for room and board as dictated by the state legislature. I've also seen those with an agenda calculate hourly wages from inmate pay _after_ deductions. If you've got a reputable source that shows otherwise, I'd love to see it. I'll also note that I'm related to a recently retired department of corrections officer, have used other prison industries services, and am familiar with how it works and what the wages are from dealing with the services and the personnel running parts of prison industries in Nevada.
Django still powers a lot of the DISQUS infrastructure. That said, we're in the process of breaking certain parts of it off into independent services and out of Django. But Django will be the main component for the foreseeable future.
Don't misread what Fluxx is saying. The core of disqus and our primary data flows are still powered by Django and that won't change. New servers which are less complex (e.g. the realtime system) are generally written on top of Flask.
While I don't disagree that Apple's technical skill isn't best-of-breed, where Apple has shined - both now and in the past - is its ruthless determination and focus on UX and HCI. Apple goes further than any company on the planet to make technology devices (computers, laptops, music players, tablets, etc.) that delight their users, just work, and make them happier and more productive.
Apple doesn't put "cheesy blah inside" stickers on its machines because stickers like that don't delight users or make the product better to use. They're stupid. Apple also doesn't chase fads, because fads are just that - a fad - and they rarely have the long-lasting staying power a good product should.
I'm not sure that Apple's UX is that much better than Windows 7/GNOME/Unity. They have done a very good job of making it easy to buy from them with the iTunes integration, but a single-button mouse and a single menu bar at the top don't necessarily make every app easier to use.
Where they do shine is in build quality and user experience which comes from owning the entire product - HW/OS/sales channel/support - and having enough margin to do it well. That's the difference between them and an equally specced Sony laptop running Windows.
It's the profit margin that really makes them special. Sony used to make products of this design and build quality, but to compete it had to cut costs, and so quality, and had to accept the bloatware and stickers. Apple's brilliance has been in managing the process so that it can cut production costs while increasing quality and adding more features.
I give a huge amount of credit to Cook for this. Jobs demanding rounded corners on dialogs, or sticking with a single-button mouse whatever the focus groups said, was good technical leadership, and Ive's product design is great. But dominating the manufacturing and supply network to the extent, and with the effectiveness, that Apple has is a major achievement and is not easy.
Look at Boeing having to delay the 787 (originally the 7E7) because it couldn't get fasteners - while Apple has 747 freighters booked and ready to fly new products straight to the stores the day they are released.
>I'm not sure that Apple's UX is that much better than Windows 7/GNOME/Unity.
... this is insanity. The UXs you refer to are Apple copies, released years after the Apple UX. Windows 7 has a nice UX because Apple forced it to. People were abandoning Windows for OS X, so Microsoft invested in its UX.
Your comment is like saying, "Henry Ford's Model T was no big deal. It's hardly even better than my 1990 Honda Civic." No shit!!!!
That's a non sequitur, and you've moved the goalposts of the discussion. The discussion is about what got Apple its market share in the first place.
I'm a huge fan of office "suites" - smallish offices that comfortably hold 3-6 people. It's a good sweet spot between open spaces full of dozens of people and individual offices. Usually your suite-mates are people on the same team/project as you, so you still get that open collaboration, but you aren't distracted by (or distracting) people whose daily work is unrelated to yours.
OP explained things improperly, then. It's an important detail, because Rails currently defaults to single-threaded. That makes an operation like fetching and returning a Twitter feed inside a single request (assuming it takes hundreds of milliseconds to get a response from Twitter's servers) expensive.
Only rookies do that inside the web server process.
The job of fetching a Twitter feed can be offloaded to a background jobs queue. With a little help from Nginx, you can free the Ruby process to take care of other requests until the response of that Twitter feed is ready.
Or you could simply deploy your Rails app on top of a Java server, by means of JRuby and forward that request to a servlet that uses the continuations support in EE 6, offloading the request to an Akka actor and freeing the pipeline until it is ready. Works great and you can even write everything in Ruby ;-)
Ruby has no support for concurrency, no matter how many threads your interpreter is using. 1.8 had no OS threads at all, and 1.9 has a global interpreter lock. This is not solvable in the application layer (for example, by a framework like Rails): this is a problem inherent to the runtime.
JRuby is fully parallel with no global lock and many people choose it for deployment. Rubinius 2.0, which is in development, will also lack the GIL.
You're also very confused about how threading on Ruby MRI 1.9 works. First of all, pure Ruby code in 1.9 can and does execute on multiple OS threads, in parallel.
Also, the problem the Ruby VM still has is that while executing native code, it does not allow a context switch unless that code explicitly tells the VM it can switch. This is in effect how the global interpreter lock works. The gotcha is that well-behaved native extensions can inform the VM that a context switch is possible. For instance, the older "mysql" gem was NOT well behaved and blocked context switches between threads, but the newer mysql2 gem behaves well and works correctly with multithreading.
Right now, if you start a new Rails 3 app, it will work correctly in a multithreaded environment, and modern Rails servers are taking advantage of that, unless you install some older gems that haven't been fixed. The biggest problem is that you can't easily know which libraries are well behaved, but if that's too much of a burden, JRuby is a fully supported platform for Rails and doesn't share the same issue.
Pure Ruby code in MRI 1.9 can run on multiple OS threads, but it can't run on multiple OS threads in parallel. Here's some reading you can do if you're unsure:
Yes: Ruby can allow native code to execute in parallel with Ruby code (although it doesn't always do so, as you note). But if you're under the impression that multiple Ruby threads can execute in parallel, you're wrong. That may or may not be a problem, depending on what you need Ruby to do.
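A quick way to see the distinction on MRI (`timed` is just a helper defined here; the timings are approximate):

```ruby
# On MRI, the global lock serializes pure-Ruby execution across threads,
# but blocking operations that release the lock (sleep, well-behaved
# native IO like the mysql2 gem) can overlap across threads.
def timed
  t0 = Time.now
  yield
  Time.now - t0
end

# Two half-second sleeps in separate threads overlap: total wall time
# is ~0.5s rather than ~1s, because sleep releases the interpreter
# lock while blocking. A CPU-bound pure-Ruby loop in two threads would
# show no such speedup: the lock lets only one thread run Ruby at a time.
io_time = timed do
  2.times.map { Thread.new { sleep 0.5 } }.each(&:join)
end
puts format('two concurrent 0.5s sleeps took %.2fs', io_time)
```

So blocking IO concurrency works fine under the lock; it's parallel execution of Ruby code itself that MRI can't do.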
JRuby can be a good option for parallelism, but it's also slower than Ruby MRI 1.9 and has an ecosystem that most Ruby developers will be unfamiliar with. Regardless, my point was that Rails doesn't magically "work with multithreading," at least for standard Ruby deploys.
Hulu, Living Social, All of 37 Signals, Groupon, AirBnb, Scribd, Zendesk, Soundcloud, etc.
Twitter's scale is unlike nearly any other web app online, so I think the real story with Rails and Twitter isn't "they had to move away from it for scalability reasons"; rather, it's amazing that they were able to leverage Rails for as long as they did.
Also, Twitter is dropping Ruby altogether rather than just Rails specifically. Again, this isn't to say Ruby isn't a great language that works for most people (it let Twitter grow quickly to where they are today), but at their scale, with their demands, it doesn't work well.