A client has asked me to replace their video editor for a video podcast. It’s the standard quick-cut style: zooms, loud transitions, and big bubble-letter subtitles throughout.
They recommended using Descript, which looks to be an AI platform that does all the work for you. Throw your video/audio into the site, and it transcribes the video, allowing you to edit based on the transcription. It then makes AI recommendations and inserts zooms and transitions.
There’s no getting around using AI for some of this, like subtitle generation, but I’d rather not buy a sub to an AI platform, nor would I like to use one, so I’m looking for alternatives. The pay certainly isn’t worth the time this would take without cutting some corners, unfortunately.
Unfortunately, DaVinci Resolve isn’t playing well with my system: with the Nvidia driver I’m on (580; it worked on 550, but that’s not an option in Additional Drivers anymore for some reason), the video timeline is just a black screen (not ideal for a video editor haha). I’ve been playing around with Kdenlive and Blender’s video editor.
I found an add-on for both programs that does speech-to-text transcription, which I finally got mostly working with Kdenlive (using Whisper) but not with Blender. I also found a FOSS app called audapolis which does well pulling a transcription into an exportable file.
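For anyone curious, the rough shape of that workflow can also be reproduced outside the add-on with the openai-whisper Python package. This is just a minimal sketch (file names and model size are placeholders, and it’s not the add-on’s actual code) that writes an .srt you can import into Kdenlive:

```
import whisper

def srt_time(seconds):
    # Format seconds as an SRT timestamp: HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("medium")        # smaller models are faster but less accurate
result = model.transcribe("episode.wav")    # returns full text plus timed segments

with open("episode.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(f"{seg['text'].strip()}\n\n")
```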
Anyone have any experience making these mass-produced-style videos without going full AI? My client mentioned the previous editor spent 1-2 hours in Descript for a 15ish-minute video and 2 shorts. I’m OK with doubling that timeframe at first if it means not using Descript.


Eh… yes there is: you can pay actual humans to do that. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up on the main topic… well, good luck.
Anyway, if you do still want to go down that road, you could try the *.srt, *.ass, *.vtt, or *.sbv formats (or mux them into the .mkv? Depends on context obviously).

I’d love to pay someone, or I’d just transcribe it myself if it wouldn’t take so long. I’m new to video editing, so I’m learning as I go; I do audio, and the editing process seems fairly transferable. It’s the barrage of movement and transitions in these videos that I’m struggling not to spend a week on. I’m doing this as a favour, so outsourcing isn’t an option. I’ll be checking over the subtitles anyway; generating them just saves a bunch of time before a full pass over it.
I’d rather not have hardcoded subs at all, but these are the “no attention span” style videos for YT (constant zooms, transitions, big subtitles, etc.) that I have to mimic. Honestly, I hate the style haha, but it is what it is; the style “gets traction” on social media.
I’m quickly realizing why these videos use AI: it’s a tonne of work without it, for very little pay. I was just hoping to use as little of it as possible and to avoid going with Descript.
Anyway, appreciate you taking the time. I got some subtitle generation working with Kdenlive, but it’s looking like I’ll either have to bite the bullet with Descript or just transcribe it myself. Cleaning up the generated subs looks to be as much work as just transcribing a handful of frames at a time.

You may already have the answer from the other comments - but specifically for subtitle transcription, I’ve used Whisper and set it to output directly to SRT, which I could then import straight into Kdenlive or VLC or whatever, with timecodes and everything. It seemed accurate enough that the editing of the subs afterwards was almost non-existent.
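(If you’ve never looked inside one, the generated .srt is just plain text - numbered cues, a timecode line, then the subtitle text - so it’s easy to hand-correct. Something like this, with made-up text obviously:)

```
1
00:00:01,000 --> 00:00:04,200
Welcome back to the show.

2
00:00:04,200 --> 00:00:07,500
Today we're looking at open-source editors.
```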
I can’t remember how I installed Whisper in the first place, but I know (from pressing the up arrow in terminal 50 times) that the command I used was:
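Something along these lines, anyway - this is the standard openai-whisper CLI, and the file name and model size here are placeholders from memory rather than the exact flags:

```
whisper episode.wav --model medium --language en --output_format srt --output_dir .
```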
I was surprised (and slightly terrified) at how accurate the output was - and this was a variety of accents from Northern England and rural Scotland. Only a few minutes of correcting mistakes.