Well, to be fair, the whole shebang is from a completely different company that has its own ML library and such, so that isn't that surprising. Although I agree that some CUDA shim or similar would be a lot more interesting, getting to the point of running inference and training with your very own library is still pretty dope.
SOC2 is just "the process we say we have is what we do in practice". The process can be almost anything. Some auditors will push on stuff as "required", but they're often wrong.
But all it means in the end is you can read up on how a company works and have some level of trust that they're not lying (too much).
It makes absolutely zero guarantees about security practices, unless the documented processes make those guarantees.
Yeah, that was my understanding as well, so I fail to see how a proper SOC2 would have prevented this.
I mean ideally a proper SOC2 would mean there are processes in place to reduce the likelihood of this happening, and then also processes to recover from it if it did end up happening.
But the end result could've been essentially the same.
Valid, but for all the crap that LangChain gets, it at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise (unless you're using the optional langchain-litellm package). DSPy uses LiteLLM as its primary way to call OpenAI, etc., and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.
Not just as a gateway in a lot of cases, but CrewAI and DSPy use it directly. DSPy uses it as its only way to call upstream LLM providers, and CrewAI falls back to it if the OpenAI, Anthropic, etc. SDKs aren't available.
Yep, DSPy and CrewAI have direct dependencies on it. DSPy uses it as its primary library for calling upstream LLM providers, and CrewAI, I believe, falls back to it if the OpenAI, Anthropic, etc. SDKs aren't available.
LangChain at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise. DSPy uses LiteLLM as its primary way to call OpenAI, etc., and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.
This is bad, especially from a downstream dependency perspective. DSPy and CrewAI also import LiteLLM, so you could not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.
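If you want to check whether your own environment is pulling it in transitively, a quick sketch like this should work (just stdlib importlib.metadata, so assuming Python 3.8+ and a normal pip-managed environment; nothing DSPy- or CrewAI-specific about it):

    # Print the installed litellm version (if any) and which installed
    # packages declare litellm as a dependency, i.e. transitive pulls.
    from importlib.metadata import distributions, version, PackageNotFoundError

    try:
        print("installed litellm version:", version("litellm"))
    except PackageNotFoundError:
        print("litellm is not installed in this environment")

    for dist in distributions():
        for req in dist.requires or []:
            if req.lower().startswith("litellm"):
                print(f"{dist.metadata['Name']} requires litellm ({req})")

pip show litellm tells you roughly the same thing via its Required-by field.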
Yep, I think the worst impact is going to be from libraries that were using LiteLLM just as an upstream LLM provider library rather than as a model gateway. Hopefully, CrewAI and DSPy can get on top of it soon.
I completely removed nanobot after I found that. Luckily, I only used it a few times and inside a Docker container. litellm 1.82.6 was the latest version I could find installed; not sure if it was affected.
Very similar experience to my own. Was a 1P customer for years, but the product declined after the VC purchase, and I trust Apple more to get privacy & security right. Apple Passwords accomplishes the minimum of what I want a password manager to do. Yes, 1Password has some nice add-ons like SSH agent integration, secure notes, etc., but some of these aren't necessary or have workarounds, as outlined in this post.
I really do wish there were some way to integrate Apple Passwords with Linux, but I don't see that happening. FWIW, iCloud on Windows isn't horrible and has decent Apple Passwords support; it even works with iCloud Advanced Data Protection now.
I've asked multiple OpenAI employees on X who have been posting about the issue whether they will be processing unclassified bulk data on Americans, or what they will do when asked, since I think it's fair to assume they have received, or will receive, the same ask that was made of Anthropic. No response yet. The Head of National Security Partnerships at OpenAI seems focused on stating that the NSA is not able to use the contract. Whether or not that's true, it doesn't address the unclassified bulk data processing concern, which is a form of mass surveillance of Americans. Also not great that at least one OpenAI employee posted that the DoD "does not conduct domestic surveillance" and only issued a correction, after quite a backlash, stating that he was only quoting the Under Secretary of Defense.
"Even as Mr. Trump published the post at 3:47 p.m., the two sides kept talking. Mr. Michael, who was on a call with Anthropic executives at the time, said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said.
Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data."