i had a similar experience, but nowhere near as catastrophic. losing data like that is brutal! hope it’s not permanent man.
personally, lost o3 access for most of a day due to “suspected misuse” & had to wait on openai to manually review, which felt frustratingly opaque and arbitrary. i was literally mid-meeting, helplessly refreshing the page for like 15 minutes hoping it’d just magically come back.
mentally it fucked w/ me! mostly frustration & helplessness bc i’d specifically scoped out a mini research sprint around o3’s instant & really precise query capabilities, targeting a very niche set of github repos for agentic patterns research. deep research from openai or gemini works decently but slowly (~10-15 mins per iteration) & doesn’t let u iterate queries instantly. the task demanded quick iterations and constant judgment calls (or trial and error with prompting specificity), so having to wait around for deep research runs totally derailed that process.
it made me realize that, right now, if you lose access to that level of real-time, high quality tool use capability—like o3—you either gotta build ur own tooling to get close to that speed/accuracy, or you’re stuck waiting and losing momentum. it’s not impossible but it adds a ton of friction and cognitive overhead.
i opted to wait it out and just not work for the rest of the day instead.
Keeping control - not relinquishing total control over your person, your life, your wellbeing to entities that care little about you - is the only way to go about what happened to you. What's on display is that they have a lot of power over you, and you have little power over them. This is the reason I'm loving localhost open source, open weights, open thoughts, open actions, ...open AGI, open ASI. 🥰
Privacy reasons will be mentioned in some form. IMO that's been shown to be a red herring again and again. I personally don't care that much. Revealed preferences - when we're forced to put a price on how much we care about privacy - show that the care quantifies to close to zero. We usually lie when we say we care. And that's a good thing tbh, that we are instinctive data-share maximalists even if unaware of it.
An oldie-but-goodie book on this is "Data for the People" by early AI pioneer Andreas Weigend (also an early Amazon chief scientist). It was prescient in its forecasts, aged well, and mostly predicted our current data landscape.
I don't care about privacy because, enforced at a societal level, it retards progress at the most general level. Human intelligence is collective. And intelligence is the only remedy I know of that can upend our stupidity (which is also collective).
I think we'd better accept and internalise Sutton's "Bitter Lesson", which may as well be called "The Iron Law of (Data, Compute)": compute transforming data → information → knowledge → intelligence → agency → ...AGI... → ...ASI. And remove any and all obstacles on that path, the first being restrictions on data copying. AI may seem risky, but I think not-AI is magnitudes riskier.
Thank you for sharing. I hope they not only give you back access to your account but explain how they made that mistake and how they’ll do better in the future. A major way their products can become “sticky” is by providing features like memory. But if we have to worry about losing access to our data then why spend the effort building that memory? This kind of decision is hurting their business model.
The book "The Unaccountability Machine: Why Big Systems Make Terrible Decisions" gives a very good analysis of this problem.
Mostly just curious why, but they'll never say. Most likely, they'll just say sorry and move on.
Were you on a plus/pro plan? Maybe part of my pro plan is security...
An era when tech feels the most new, but we have the old problems...
Pro, and yeah I'd dearly love to know!
It's back for me. Just an interesting day or two ...
Do you rely heavily on OpenAI memory? I guess I'm curious why just creating a new account is out of the question.
It’s my account. With my projects. And my Google login. I don’t want to keep creating new accounts because the org screwed up.
I'm off to press the "request a copy of your data" button that Anthropic still provided as of last week. I hope it's still there.
Really strange- were you making fun of Sam?
Nope