1. In a monorepo you don't have external dependencies. That is the whole point of having a monorepo: everything is included. You can build on an air-gapped system, there are no unexpected inputs into your builds, no network-partitioning problems. Total control and stability is the whole reason people do it. But you pay for it by potentially having to maintain third-party code yourself, by using more disk space, and probably by needing extra infrastructure to side-step tools that cannot be configured to stay off the network, etc. I hope this also answers the question about external dependencies: they become internal dependencies.
2. Why you should never use requirements.txt: the tradition of using it comes from a complete misunderstanding of the goals of project deployment. This is surprising and upsetting, because it isn't a difficult concept. It happens because most developers don't want to understand how the infrastructure of their project works, and are willing to settle for the first "solution" that "worked". The goal of deploying a project must be that byte-compiled Python code is placed in the platlib directory, accompanying data in the data directory, and so on. The reliable way to accomplish this is to make a Python package and install it. requirements.txt plays no role in this process, since package requirements need to be written into the METADATA file in the package's .dist-info directory. Instead, the process built around requirements.txt typically ends up installing "something" that, at the time of writing, allowed the authors to somehow make their code work, thanks to a combination of factors such as the current working directory, path configuration (.pth) files placed into platlib without their knowledge, etc. This is a very fragile setup, and projects designed this way usually don't survive multiple years without updates that keep modifying requirements.txt to chase the latest changes in an ever-shifting environment.
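To make the point above concrete, here's a minimal sketch (using only the standard library) of where installed code actually lands and where dependency metadata actually lives; "pip" below is just an example distribution name, not an assumption that it must be installed:

```python
# Where installed code goes, and where requirements are really recorded.
import sysconfig
from importlib.metadata import requires, PackageNotFoundError

# platlib is where (byte-compiled) package code gets installed;
# "data" is the root for accompanying data files.
paths = sysconfig.get_paths()
print("platlib:", paths["platlib"])
print("data:   ", paths["data"])

# An installed package's requirements are read from the Requires-Dist
# lines of the METADATA file in its .dist-info directory -- not from
# any requirements.txt shipped alongside the source.
try:
    print(requires("pip"))   # a list of requirement strings, or None
except PackageNotFoundError:
    pass                     # "pip" may be absent in this environment
```

Note that nothing in this lookup ever consults requirements.txt: once the package is installed, that file is dead weight.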
3. pyproject.toml had more potential, in principle, but turned out to be a disaster. The idea behind this contraption was to gather the configuration of multiple Python-related tools under one roof. But the chosen format was far too simplistic to realistically replace the native configuration of those tools, and configuration started to slide in two bad directions: either "string programming" (a technique where strings in the configuration acquire special meaning and require parsing), or delegation (pyproject.toml contains just enough to redirect to the real configuration elsewhere). A second problem with pyproject.toml, especially when it comes to dependencies: there was no plan for how to solve the problem, no system. So every tool that decided to support pyproject.toml did it in whatever way suited it. In the end, it's always better to just use the native configuration format of the tool that actually builds the package, instead of involving the middle-man: pyproject.toml always stands between the developer and their ability to debug and fine-tune the tool they need to run. It's the MS Windows registry all over again, only even more poorly executed.
----
So, what should you do instead? -- Well, there's a problem... there aren't any good Python tools :( And, as I mentioned before, the reason isn't even the tools or their authors: it's that the underlying design of Python's infrastructure is broken. So you are bound to choose between the bad and the worse.
On the bright side: once you really understand what needs to be done, the problem is very simple. For any individual project you can solve all your deployment and packaging problems very easily with a script in any language that can do infrastructure-related work. I've done this multiple times (out of frustration with Python's own tools) and never had a reason to regret it.
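As a hedged sketch of what such a "roll your own" install step can look like (the src/mypkg layout and names are hypothetical; a real script would add error handling and versioning), all it has to do is byte-compile a source tree and copy it into platlib:

```python
#!/usr/bin/env python3
# Minimal hand-rolled "install": byte-compile and copy into platlib.
import compileall
import shutil
import sysconfig
from pathlib import Path

SRC = Path("src/mypkg")  # hypothetical source tree of your package
DEST = Path(sysconfig.get_paths()["platlib"]) / "mypkg"

def install(src: Path, dest: Path) -> None:
    # Byte-compile first so .pyc files ship alongside the sources.
    compileall.compile_dir(str(src), quiet=1)
    # Replace any previous install: remove the old tree, then copy.
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(src, dest)

if __name__ == "__main__" and SRC.exists():
    install(SRC, DEST)
```

That's the entire job an installer has to do for a pure-Python project; everything else the mainstream tools add on top is accidental complexity.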
Do you have no external dependencies at all?