A client has asked me to replace their video editor for a video podcast. It’s the standard quick-cut style: zooms, loud transitions, and big bubble-letter subtitles throughout.
They recommended using Descript, which looks to be an AI platform that does all the work for you. Throw your video/audio into the site, and it transcribes the video, allowing you to edit based on the transcription. It then makes AI recommendations and inserts zooms and transitions.
There’s no getting around using AI for some of this, like subtitle generation, but I’d rather not buy a sub to an AI platform, nor do I want to use one, so I’m looking for alternatives. The pay certainly isn’t worth the time it would take without cutting corners, unfortunately.
Unfortunately, DaVinci Resolve isn’t playing well with my system: the Nvidia driver I use (580; it worked on 550, but that’s not an option in Additional Drivers anymore for some reason) results in a black screen for the video timeline (not ideal for a video editor haha). I’ve been playing around with Kdenlive and Blender’s video editor.
I found an add-on for both programs that transcribes speech-to-text, which I finally got mostly working with Kdenlive (using Whisper) but not with Blender. I also found a FOSS app called audapolis which does well pulling a transcription into an exportable file.
Anyone have any experience making these mass-produced-style videos without going full AI? My client mentioned the old VE spent 1-2 hours with Descript for a 15ish min video and 2 shorts. I’m ok doubling that timeframe at first if it means not using Descript.
Why not use what the client requested?
I was hoping to avoid going full AI. Unfortunately, these are the YT-type videos that AI is completely taking over, so unless I want to spend a week on this, I think I’ll have to. I was just trying to exhaust every avenue before going that route.
Yeah, I get that. But it seems pretty well suited for the task.
You could probably create a similar workflow using ComfyUI, though. But it will require time and effort.
Honestly it’s looking that way. I just needed to try everything else first for my own principles. Appreciate the kindness
There’s no getting around using AI for some of this, like subtitle generation
Eh… yes there is: you can pay actual humans to do that. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up on the main topic… well, good luck.
Anyway, if you still want to go down that road, you could try:
- ffmpeg with whisper.cpp (but honestly I’m not convinced hardcoding subtitles is a good practice; why not package as e.g. .mkv? Depends on context obviously; see the sketch after this list)
- Kdenlive with Vosk
- Kdenlive with whatever else via *.srt / *.ass / *.vtt / *.sbv formats
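For the .mkv route, here’s a minimal sketch of what I mean (the filenames are hypothetical, and it assumes ffmpeg is on your PATH): instead of burning the subs in, you mux the SRT alongside the video as a selectable text track.

```python
# Minimal sketch: mux an SRT as a soft-subtitle track instead of hardcoding.
# "episode.mp4" / "episode.srt" / "episode.mkv" are placeholder names.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "episode.mp4",   # source video + audio
    "-i", "episode.srt",   # subtitles from whisper.cpp, Vosk, etc.
    "-c", "copy",          # copy the existing streams, no re-encode
    "-c:s", "srt",         # store the subtitles as an SRT text track
    "episode.mkv",
], check=True)
```

Viewers can then toggle the subs, and YouTube also accepts a separate SRT upload, so you can fix a typo without re-rendering.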
I’d love to pay someone, or I’d just transcribe it myself if it didn’t take so long. I’m new to VE and learning as I go; I do audio, and the editing process seems fairly transferable. It’s the barrage of movement and transitions in these that I’m struggling not to spend a week on. I’m doing this as a favour, so outsourcing isn’t an option. I’ll be checking over the subtitles anyway; generating them just saves a bunch of time before a full pass.
I’d rather not have hardcoded subs at all, but these are the “no attention span” style videos for YT (constant zooms, transitions, big subtitles, etc.) that I have to mimic. Honestly I hate the style haha, but it is what it is. The style “gets traction” on social media.
I’m quickly realizing why these videos use AI: it’s a tonne of work without it for very little pay. I was just hoping to use as little of it as possible and to avoid Descript in particular.
Anyway, appreciate you taking the time. I got some sub generation working with Kdenlive, but it’s looking like I either have to bite the bullet with Descript or just transcribe it myself; cleaning up the generated subs looks to be as much work as transcribing a handful of frames at a time.
You may already have the answer from the other comments, but specifically for subtitle transcription, I’ve used Whisper and set it to output directly to SRT, which I could then import straight into Kdenlive or VLC or whatever, with timecodes and everything. It seemed accurate enough that editing the subs afterwards was almost non-existent.
I can’t remember how I installed Whisper in the first place, but I know (from pressing the up arrow in terminal 50 times) that the command I used was:
whisper FILENAME.MP3 --model medium.en --language English --output_format srt
I was surprised/terrified by how accurate the output was, and that was with a variety of accents from Northern England and rural Scotland. Only a few minutes of correcting mistakes.
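And if you ever want to script it (say, to batch a whole backlog of episodes), roughly the same thing works from Python with the openai-whisper package. This is just a sketch of writing the segments out as SRT yourself, with FILENAME.MP3 standing in for your audio as above:

```python
# Rough Python equivalent of the CLI call above (openai-whisper package).
import whisper

model = whisper.load_model("medium.en")
result = model.transcribe("FILENAME.MP3")

def srt_time(t):
    # seconds (float) -> "HH:MM:SS,mmm"
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},{int(t * 1000) % 1000:03}"

with open("FILENAME.srt", "w") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```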
- ffmpeg with whisper.cpp (but honestly I’m not convinced hardcoding subtitles is a good practice; why not package as e.g. .mkv? Depends on context obviously)
I think Kdenlive has subtitle transcription, which you can enable and configure in Settings.
Finally got that working; I had to run the AppImage instead of the Flatpak for it to work. Now I just gotta see if I can mimic the font haha. Thanks
I know that’s not a ready-to-use solution, but Blender has a very powerful Python API which should allow you to automate everything, including calls to an AI backend of your choice if needed.
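To give a flavour, here’s a minimal sketch, not a working add-on: the (start, end, text) tuples are dummy data standing in for whatever your transcription produces, and details can shift between Blender versions.

```python
# Minimal sketch: drop subtitle text strips onto Blender's sequencer via bpy.
# The `subs` tuples are placeholders for real transcription output.
import bpy

subs = [(1, 48, "Hello"), (49, 120, "Welcome to the show")]

scene = bpy.context.scene
if scene.sequence_editor is None:
    scene.sequence_editor_create()

for start, end, text in subs:
    strip = scene.sequence_editor.sequences.new_effect(
        name="sub", type='TEXT', channel=3,
        frame_start=start, frame_end=end,
    )
    strip.text = text
    strip.font_size = 96          # big-bubble-subtitle territory
    strip.location = (0.5, 0.1)   # normalized position, bottom centre
```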
Interesting. I’m struggling to get transcription add-ons to work in Blender. I’ve never installed Python script stuff, so I don’t know if I screwed something up. Every time I try transcription, it either just stops around 95% or crashes with:

Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
Aborted (core dumped)

Do you have a suggestion of where I can get started learning about what you’re talking about?
I think this libcudnn is an Nvidia CUDA thing. I guess you have checked that the correct CUDA libs are installed and Blender has permission and knows where to look for them?
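One quick way to check (just a guess at the failure mode on my part): run something like this from Blender’s Python console, so it runs in the same environment as the add-on, and see whether any of the library names from your error will actually load:

```python
# Tries to load the exact libcudnn names from the error message via ctypes.
import ctypes

for name in ("libcudnn_ops.so.9.1.0", "libcudnn_ops.so.9.1",
             "libcudnn_ops.so.9", "libcudnn_ops.so"):
    try:
        ctypes.CDLL(name)
        print("loaded:", name)
        break
    except OSError as err:
        print("failed:", name, "-", err)
```

If none of them load, the cuDNN 9 libraries either aren’t installed or aren’t somewhere the dynamic linker looks.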
A first stop for learning the Blender Python API would be its documentation: https://docs.blender.org/api/current/index.html
In general, you can skip anything that you can already do through the user interface. But video editing is just a small part of the API, and if you don’t have any programming experience yet, this could be overkill for what you’re looking for.
Perhaps someone has had the same problems as you before and implemented something. Searching explicitly for Blender video editing automation or its Python API might turn up some results.
Honestly, I’m new to Linux as of about 3 months ago, so it’s been a bit of a learning curve on top of learning VE haha. I didn’t realize CUDA had versions, let alone that it was anything other than an acronym for using the GPU (Nvidia for me), and I now figure CUDA is probably why DaVinci Resolve isn’t working right. Kdenlive’s search for GPU over CPU had CUDA versions listed (mine’s 12.0; it was searching for 12.3, 12.4, 12.5, etc.), which made me realize CUDA versions and Nvidia drivers differ.
So, long story short: no, I haven’t checked that beyond looking up how to update CUDA haha. I really appreciate you taking the time; I’ll look into the Python route next. One thing I love about Linux: I’m constantly learning.


