I gave Notes/Plume a try a year or so ago; it was an interesting experience. I ended up falling back to Joplin since I could use it on macOS, iOS, and Fedora with synchronization via Dropbox.
I've always been curious about productizing apps like these. From a financial/business perspective, have you found Daino worthwhile, or enough of a success (by your standards), to continue developing it as a proprietary application?
Hi! That put a smile on my face (: I'm working now on a mobile version with real-time sync, so maybe give it another try when it comes out.
Not really, not yet. Back when my FOSS app was popular, I earned a livable amount of money from ads on the website. But after an SEO crash that all went down the drain, and the money I'm getting now from subscriptions to Daino Notes is nice but not livable. For the last year I've been working (at a really awesome place) doing React programming (my first salaried job, actually), and on nights and weekends working on Daino.
I actually got many requests to license Daino Notes' block editor, so I've figured there's a business there. I'm working on something I'm calling Daino Qt, a collection of components to accelerate Qt app development (so I'm also its client). It will include my block editor and components for mobile - the current Qt components on mobile are extremely shitty - so I'm planning on changing that with things like a native-feeling swipeable stack view, native-feeling text editing, etc. And maybe a Qt C++ client SDK for InstantDB (and more stuff).
Hope I can sell this as well while also consuming these components in Daino Notes and other apps I develop.
I found gitea's interface to be so unusably bad that I switched to full-fat GitLab.
Was this Gitea pre-UI redesign or after? 1.23 introduced some major UI overhauls, with additional changes in the following releases. Forgejo currently reflects the Gitea 1.22 UI, reminiscent of GitHub's earlier design.
eBPF is restricted when booted in an SB environment, but it's not nonfunctional. The default config puts the kernel into the "integrity" mode of Kernel Lockdown, which reduces the scope of access and enforces read-only usage.
Whether or not the specific functions needed to replicate this tool are impacted is beyond my knowledge.
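For the curious, the active Lockdown mode is exposed through securityfs, so it's easy to check what state your machine booted into. A minimal sketch in Python, assuming a kernel built with the Lockdown LSM (the file lists all modes and brackets the active one):

    # Read the kernel's active Lockdown mode from securityfs.
    # Typical contents: "none [integrity] confidentiality".
    from pathlib import Path

    def lockdown_mode() -> str:
        text = Path("/sys/kernel/security/lockdown").read_text()
        for token in text.split():
            if token.startswith("["):
                return token.strip("[]")
        return "unknown"

    if __name__ == "__main__":
        print(lockdown_mode())  # usually "integrity" when SB is on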
> on the most demanding real-world production workloads (think Pixar/Weta), which for now it hasn't been.
Super small nit (or info tidbit), but it doesn't take away from your overall message regarding production and scene scale.
Pixar does not and has not used Maya as the primary studio application; it's really only used for asset modeling and some minor shading tasks like UV generation and some Ptex painting. The actual studio app is Presto, an in-house tool Pixar has developed over the years since its earliest productions. All other DCCs are team/task specific.
DreamWorks is similar with their tool, Premo, IIRC. Walt Disney Animation Studios (WDAS) does use Maya as the core app last I saw, but I don't know if they've made any headway with evaluating Presto since 2019...
> And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
I wouldn't frame it as "scared". The issue is that at a certain scene scale Python becomes the performance bottleneck if that's all you can use.
> You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
This is fine if you only ever have one show in production. Most non-boutique studios have multiple shows being worked on in tandem, be it internal productions or contract bids that require interfacing with other studios. These separate productions can have any given permutation of DCC and plugin versions, all of which the internal pipeline and production engineering teams have to support simultaneously. Apps that provide a stable C/C++ SDK and Python interface across versions are significantly more amenable to these kinds of environments as the core studio hub app, rather than being ancillary, task specific tools.
If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
If the company is more than a boutique shop, I would expect them to have a somewhat competent CTO to manage this kind of problem - one that isn't specific to Blender, even!
Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
Small timers, boutique shops, and humble folks like me just try to get by with the tools we can afford.
On a related note, though: I built a Blender plugin with version 2.93 and recently learned it still works fine on Blender 4. The "constantly changing API" isn't the beast some claim it is.
> If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
Considering productions span years, not months, artists would never get to use newer tools if studios operated that way. And it really only works if shows share similar end dates, which is not the reality we live in. Productions can start and end at any point in another show's schedule, and newer tools can offer features that upcoming productions can take advantage of. Each show will freeze its stack, of course, but a studio could be juggling multiple stacks simultaneously, each with its own dependency variants (see the VFX Reference Platform).
> Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
That would be the ideal, something that can be difficult to achieve in practice. You'll find small teams of quality engineers overwhelmed with the sheer volume of work, and other larger teams with less experience who don't have enough senior folks to guide them. The industry is far from perfect, but it does generally work.
> But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
And back to reality XD
That being said, a number of studios have been reducing their Autodesk spend over the past few years, because it's honestly a sick joke the way the M&E division is run. It's a free several-hundred-million-a-year revenue earner, but they foist the CAD business operations onto it and the products suffer. Houdini's getting really close, but if another AIO can cover effectively everything in a way that each team sees as better, you'll start to see migrations ramp up. Realistically this comes down to the rigging and animation departments more than any other. But Maya will never go away completely, as it'll still be needed for referring to and opening older projects from productions that used it, beyond just converting assets to a different format. USD is pretty much that intermediary anyway; it's the training and migration effort that becomes the final roadblock.
Blender gives you two paths for extension: a) fork it and layer your changes directly onto the app, or b) create a plugin via the Blender Python API.
For vendors, the former is obviously a no-go. The latter has the issue of being throttled by Python, so you effectively have to create a shim that communicates with an external library or application that actually performs the compute-intensive tasks.
Most (if not all) industry DCCs provide a dedicated C++ SDK with Python bindings available if desired.
I'm curious, as someone who's thinking of making a Blender plug-in that will need to use some native-ish (not C++, though) libraries/modules for performance: what are the issues with using a Python interface instead of a dedicated C++ SDK?
The Python API is limited by Python itself. You're restricted to a GIL environment, so your ability to maximize throughput and reduce latency will be limited. For small/average scenes this may not matter for your addon; however, larger scenes will suffer. There are a few popular options for developing Blender functionality:
1. Extend Blender itself. This will net you the maximum performance, but you essentially need to maintain your own custom fork of Blender. Generally not recommended outside of large pipeline environments with dedicated support engineers.
2. Native Python addon. This is what 99% of addons are, just accessing scene data via Blender's Python interface. Drawbacks mentioned above, though there are some helper utilities to batch process information to regain some performance.
3. Hybrid Python addon. You use the Python API as a glue layer to pass information between Blender and a natively compiled library via Python's C extension API (or ctypes/cffi). With the exception of extracting scene data, this gives you back the compute performance and host-resource scalability you'd get from building on Blender directly. Being able to escape the GIL opens a lot of doors for parallel computation; see the sketch after this list.
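As a rough illustration of option 3, here's a minimal sketch. The library (libdeform.so) and its smooth_verts() entry point are hypothetical stand-ins for your own compiled code; bpy's foreach_get/foreach_set and ctypes are the real mechanisms:

    # Hybrid addon sketch. "libdeform.so" and smooth_verts() are
    # hypothetical; bpy exists only inside Blender's bundled Python.
    import ctypes
    import numpy as np
    import bpy

    _lib = ctypes.CDLL("./libdeform.so")  # hypothetical native library
    _lib.smooth_verts.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_size_t]

    def smooth_active_mesh():
        mesh = bpy.context.active_object.data
        n = len(mesh.vertices)
        # foreach_get/foreach_set move vertex data in bulk, avoiding a
        # slow per-vertex Python loop on large meshes.
        coords = np.empty(n * 3, dtype=np.float32)
        mesh.vertices.foreach_get("co", coords)
        # The native call runs free of the GIL and can use its own threads.
        _lib.smooth_verts(coords.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), n)
        mesh.vertices.foreach_set("co", coords)
        mesh.update()

The bulk transfer via foreach_get is the same trick option 2 addons use to claw back some performance; the native call is where the hybrid approach actually pays off.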
> That said, if pressed, I’d recommend AsciiDoc over any Markup flavor for a greenfield project _today_.
Likewise for me, and I am a massive Material for MkDocs fan. Markdown is certainly simple to use and gets the job done, but AsciiDoc just provides so much out of the box without hurting my eyes the way reStructuredText (used by Sphinx) does. It also helps that there's effectively one type of AsciiDoc I'm aware of, whereas there's a number of Markdown flavors atop CommonMark to be cognizant of. I will concede, however, that its learning curve is not as gentle as Markdown's...
A powerful framework for working with AsciiDoc for documentation purposes is Antora[0]. The Red Hat ecosystem (the Fedora and CentOS projects) uses it for their public-facing docs. That being said, it is a beast to understand if you're starting from scratch rather than contributing to a project's existing docs. It's designed to consolidate large projects, with multiple component repositories and multiple versions per component, into a single docs site. The typical balance: more capabilities, more up-front cost of adoption.
The AsciiDoc WG also maintains an Awesome AsciiDoc[1] page of projects within the ecosystem.
I do this as well, but there are a number of service providers that just do not handle subaddressing at all: creating an account results in never receiving a confirmation or verification code because the system fails to parse the address.
I've started using grouped aliases instead for a bunch of things.
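For what it's worth, the usual culprit is an overly strict validation pattern that simply doesn't allow '+' in the local part. An illustrative sketch (the regex below is a made-up example of the naive kind, not taken from any specific provider):

    # Illustrative only: a naive validation regex of the kind many
    # signup forms use. It omits '+', so subaddresses are rejected.
    import re

    NAIVE_EMAIL = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    print(bool(NAIVE_EMAIL.match("user@example.com")))       # True
    print(bool(NAIVE_EMAIL.match("user+shop@example.com")))  # False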