I wonder if you can raise the 10kloc limit if you have good static analysis for your tool (I vibecoded one in Python) and good tests. Sometimes good tests aren't possible since there are too many different cases, but with other kinds of code you can cover all the cases with something like 50 to 100 tests.
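To give a flavor of what I mean by a vibecoded static-analysis check, here's a toy sketch using Python's standard ast module (purely illustrative, not my actual tool; the rule it enforces is made up for the example):

    import ast
    import sys

    def undocumented_functions(source: str) -> list[str]:
        # Walk the syntax tree and collect every function without a docstring.
        tree = ast.parse(source)
        return [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
        ]

    if __name__ == "__main__":
        # Usage: python check_docstrings.py some_module.py
        with open(sys.argv[1]) as f:
            print(undocumented_functions(f.read()))

A real tool would chain many small rules like this over the codebase; each one is cheap to write and mechanical to check.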
From my reading of what you said, you think comp is important, and so are other things. You outlined those a bit as well, but I've already forgotten them.
Quite frankly, I think some people here get spooked too quickly and think what you say is sus. I see that as a sign that they aren't fully engaging in a good-faith discussion. Or they simply read things very differently than I do.
I'm simply writing this because I think there are enough people who read what you wrote the way I do. They just don't speak up the way people who feel "outraged" do (a bit too dramatic a term, but English is my 2nd language). "Outraged" people simply seem more vocal to me.
For clarity: I feel neutral about this whole thing. I do appreciate the work you've done in the past.
Product engineer turned growth engineer with 5 years of experience in software engineering and 2 years of experience teaching it. Pair me with marketing experts and I'll multiply their output, leading to more revenue.
This year I turned over a new leaf. I've solved growth marketing automation challenges and I love it. I walk up to a marketing expert, ask them what their biggest challenge is, and automate it (usually). I've automated thousands of hours away for marketing professionals; it's almost as if each specialist is a team of their own.
Achieving this has been a mix of project management, data analysis (Jupyter/Marimo), AI engineering (LLM APIs), data engineering (Airflow), and my old wheelhouse, full-stack web development.
My ambition, a few years down the line, is to create a startup myself doing both growth marketing and development. I will give my all.
Location: Amsterdam
Remote: preferably remote or hybrid
Willing to relocate: yes
Technologies: ReactJS, Python (Flask), JavaScript (NodeJS), MCP, Ahrefs API, and many, many more.
Résumé/CV: on request
Email: see my profile
Yea, I know. I once went into MBTI in the vein of "it's not scientific, but can I learn something useful from it?" I tend to test close to ENFP/ENTP; I can notice tendencies of both in me. Then I went on the ENFP subreddit, since I suspected many of them had ADHD, and simply asked in a poll. A lot of them said they did, as I suspected; I'm subclinical myself (and it becomes clinical real fast if I sleep only 6 hours one night).
So I learned that you can definitely glean some insights from it. One insight: I'm a "talk out loud" thinker. I don't really value that as an identity thing, but it is definitely something I notice I do. I also think plenty of things in my head, but I tend to think out loud more than the average person.
So yea, that's how pseudoscience can sometimes still lead to useful insights about one particular individual. Same thing with philosophy, really. It has a stronger academic grounding, but to call philosophy a science is... a bit... tricky... in many cases. The common theme is that it's also usually not empirically grounded but still really useful.
Rigor helps you draw better insights from data. That can help with entrepreneurship.
What can also help with entrepreneurship is having a bias for action. So even if your insights are wrong, if you act and keep acting, you will partially shape reality to your will and partially bend to its will.
So there are certain forces that can compensate for a lack of rigor.
The best companies have both on their side.
I mean, he put it in (IMO) too harsh a way (e.g. "pathetic"), but I do think it raises the point: if you don't own up to your actions, then how can you be held accountable for anything?
Unless we want to live in a world where accountability is optional, I think taking responsibility for your actions is the only choice.
And to be honest, today I don’t know where we stand on this. It seems a lot of people don’t care enough about accountability but then again a lot of people do. That’s just my take.
I mean, we're only human. We all make mistakes. Sure, some mistakes are worse than others, but in the abstract, even before AI, who hasn't sent an email that they later regretted?
Yes, we all make mistakes. But when I make a mistake in an email, you can be damn sure it's my own mistake, one I take full accountability for.
Yes, thank you. I used "pathetic" in the sense of something that makes you feel sorry for someone, not something despicable. I fully expect people to stand by what they write and not blame AI, etc., but my comment came across as too aggressive.
> in the sense of something that makes you feel sorry for someone
I've been speaking English as a second language since I was 12, but I completely missed that you could use it that way. I guess they don't say it like that much in Hollywood, video games, or... the internet.
It can be. It can also not be. A friend of mine had a PITA boss; thanks to ChatGPT, he salvaged the relationship even though he hated working with the guy.
He eventually moved on to something else, but in the meantime his stress levels went way down.
All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.
It sounds like that relationship wasn't meant to be salvaged in the first place. ChatGPT perhaps prolonged your friend's suffering; he ended up moving on anyway, perhaps after an unnecessary delay.
My knee-jerk reaction is that outsourcing thinking and writing to an LLM is a defeat of massive proportions, a loss of authenticity in an increasingly less authentic world.
On the other hand, before LLMs came along, didn't we ask a friend or colleague for their opinion on an email we were about to write to our boss about an important professional or personal matter?
I have been asked several times to give advice on the content and tone of emails or messages that some of my friends were about to send. On some occasions, I have written emails on their behalf.
Is it really any different to ask an LLM instead of me? Do I have a better understanding of the situation, the tone, the words, or the content to use?
Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.
Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.
But if you know you're prone to getting into conflicts over email, an LLM-powered filter on outgoing mail that flagged "hey, you're probably going to regret sending that" messages before they went out the door seems like it might be a helpful tool.
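A minimal sketch of what I have in mind, assuming the OpenAI Python client (the model name, prompt, and yes/no protocol are all my own assumptions, not an existing product):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def likely_to_regret(draft: str) -> bool:
        # Ask the model for a one-word YES/NO verdict on the outgoing draft.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You review outgoing emails. Reply YES if the sender is "
                        "likely to regret sending this (hostile, rash, or "
                        "escalating), otherwise reply NO."
                    ),
                },
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content.strip().upper().startswith("YES")

    if __name__ == "__main__":
        draft = "Frankly, this is the last straw. I'm done covering for your team."
        if likely_to_regret(draft):
            print("Hold on: you might regret sending that.")

The filter only flags; the human still decides whether to hit send.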
"Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."
- I find this a point in favor of LLMs, not a flaw. It is a philosophical stance, one in which whatever does not require effort or time is intrinsically not valuable (see using GLP-1 peptides vs. sucking it up to lose weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?
"Secondly, I think when a friend is giving advice the responses are more likely to be advice"
- You can ask an LLM for advice too, instead of having it write directly and taking the model's text without further reflection.
Here I find parallels with therapy, which, in its modern form, does not provide answers but questions, means of investigation, and tools to better deal with the problems of our lives.
But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).
In the cases where I wrote a message or email on someone else's behalf, I was asked to do it: "can you write it for me, please?" I even had to write recommendation letters for myself; my PhD supervisor asked me to.
I wasn't arguing that getting LLMs to do this is necessarily bad -- I just think it really is different from having in the past been able to ask other humans for help, and so that past experience isn't a reliable guide to whether we might find we have problems with unexpected effects of this new technology.
If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.
It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.
It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.
Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.
But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.