Conversation
void CJamStreamer::OnStarted() {
    // create a pipe to ffmpeg, called "pipeout", to be able to write the PCM data to it
    pcm = 0; // pcm must be initialised
    QString command = "ffmpeg -y -f s16le -ar 48000 -ac 2 -i - " + strStreamDest;
It might be worth considering hardening the ffmpeg call as it executes a command and includes a variable in the invocation. I fail to find a popen() alternative which takes an array of arguments instead of a string (i.e. like exec*() to system()). Unless I'm missing something, the only remaining hardening measure would be proper quoting + escaping of the variable. One might argue that the variable is currently supposed to come from a trusted context (command line), but things may change in unexpected ways sometimes. Improving this would also ensure that special chars will not break things even in well-intentioned environments.
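For the record, quoting a single argument for a POSIX shell can be done by wrapping it in single quotes and escaping any embedded single quotes. This is an illustrative sketch, not part of the PR, and the helper name is made up; note that Windows cmd.exe quoting rules differ:

```cpp
#include <string>

// Hypothetical helper (not part of the PR): wraps a string in single
// quotes for a POSIX shell, escaping embedded single quotes, so it can
// be appended more safely to a popen() command string.
std::string shellQuote(const std::string& arg)
{
    std::string quoted = "'";
    for (char c : arg)
    {
        if (c == '\'')
            quoted += "'\\''"; // close quote, escaped quote, reopen quote
        else
            quoted += c;
    }
    quoted += "'";
    return quoted;
}
```

For example, `shellQuote("-f mp3")` yields `'-f mp3'`, which the shell passes through as a single literal argument.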
Perhaps use QProcess?
Good idea! This should solve the potential security impact and might also get us cross-platform support for free. :)
Hm. I'm thinking you're going to want to know the path to the ffmpeg binary. Ideally from either a setting or from a command line option. Better not just to launch whatever executable happens to get picked up from "ffmpeg" getting run.
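One possible shape for this (sketch only; `JAMULUS_FFMPEG` is an invented environment variable name, not an actual Jamulus setting, and a real implementation would more likely use a command line option or a Qt setting):

```cpp
#include <cstdlib>
#include <string>

// Hypothetical sketch: resolve the encoder binary from an environment
// variable (JAMULUS_FFMPEG is an invented name for illustration),
// falling back to plain "ffmpeg" resolved via $PATH.
std::string ffmpegPath()
{
    const char* env = std::getenv("JAMULUS_FFMPEG");
    return (env != nullptr && *env != '\0') ? std::string(env) : std::string("ffmpeg");
}
```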
the only remaining hardening measure would be proper quoting + escaping of the variable
How would I go about doing this? At the moment I just read the output part of ffmpeg from the command line (which can be anything that ffmpeg understands, like "-f mp3 icecast://source::/", or something completely different; I don't know in advance how many arguments will be given in different use cases).
Perhaps use QProcess?
Thanks for the advice! I looked into it and can't see how I could alter my code (I'm not a programmer ;) to do the same as before using QProcess. (popen creates a pipe which I can then stream the pcm data to, does QProcess offer this?) Any hints on what I would be looking for?
Better not just to launch whatever executable happens to get picked up from "ffmpeg" getting run.
I've thought about that before and come to the conclusion that it won't matter for security reasons, since you can call anything named "ffmpeg". Allowing the user to specify which program to call doesn't make it more secure in my opinion, since you'd still be able to call anything you want. It'd only be useful if you wanted to call a certain build of ffmpeg besides the one already installed system-wide, but I couldn't think of a use case where you would want to do that.
Well, it doesn't crash for me
It does for me, but only when I connect a client. But yes, it seems ffmpeg never gets started.
Maybe we should just stick with the popen version...
I think so, too
I'm not seeing a segfault either, but I have found three problems which I have fixed in a PR to @npostavs' branch against this PR. :)
Awesome! Thanks a lot for your work :)
Would that mean I can get rid of the conditional compile for win32 as well? I can't check since I have no windows machine here.
Would that mean I can get rid of the conditional compile for win32 as well?
I think so, yes, but one should test.
I can't check since I have no windows machine here.
Neither can I (Linux-only here). A first smoke test would probably be removing the ifdefs; CodeQL should probably catch it if compiling breaks. Someone should still run a real test with ffmpeg if possible.
It doesn't compile on Windows whatsoever. I've put the conditional compile back in.
(force-pushed 79a4505 to 082dca5)
that fixed it, thanks!
(force-pushed 2e9c9a5 to 082dca5)
To avoid redundant copying.
Is this ready for final review?
Make CJamStreamer::process data parameter a reference
Signed-off-by: Christian Hoffmann <mail@hoffmann-christian.info>
- QProcess automatically populates argv[0]. Explicitly passing "ffmpeg" as first argument will end up as argv[1] and will break ffmpeg.
- Increase ffmpeg log level to "error" so that startup errors are really output.
- Add support for space-delimited multiple parameters for the --streamto argument.

Signed-off-by: Christian Hoffmann <mail@hoffmann-christian.info>
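The commit's argument handling could look roughly like this in plain C++. This is a sketch only: the real code presumably uses QString/QStringList, this version splits on any whitespace (which may differ from the actual implementation), and the Icecast URL in the test is made up:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Sketch of "space-delimited multiple parameters" for --streamto:
// split the value into one list entry per argument, since QProcess
// expects a list of arguments (and supplies argv[0] itself, which is
// why "ffmpeg" must not appear in this list).
std::vector<std::string> splitArgs(const std::string& streamDest)
{
    std::istringstream stream(streamDest);
    std::vector<std::string> args;
    std::string arg;
    while (stream >> arg) // splits on runs of whitespace
        args.push_back(arg);
    return args;
}
```

A caveat worth noting: whitespace splitting cannot represent a single argument that itself contains spaces, which is one reason quoting rules come up in this thread.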
Fix streamer2
I'd go with the generic solution as well. This wouldn't make ffmpeg compulsory and would allow for other use cases to be covered. Another advantage over the existing implementations would be being able to stop and start streaming, or change the stream's destination, without having to restart (reconfigure) the server.
Ah, right, didn't even think of it. Cool! :)
Not sure if we are talking about the same thing. With "generic PCM-on-stdin streaming" I meant that Jamulus opens a sub-process and streams PCM to stdin of that sub-process. I'm not referring to Jamulus' stdin. This approach is the same with or without QProcess. Regarding Windows: When using …
OK - that's good -- I just worry when I see "stdin" and "process" for Windows... :)
So, you are using --streamto to make ffmpeg send the stream to an Icecast server. When you connect to the Icecast server, the first client gets audio data, but the second client won't. Do I understand correctly? Do you have a specific way to reproduce this? Maybe even without Icecast, but with local files or anything one could easily replicate?
@hoffie |
So you're saying the setup is:
ffmpeg will only be running during a session. Jamulus and Icecast are running all the time. (1) Right so far? Then a Jamulus client connects to the Jamulus server and ffmpeg connects to Icecast. Then an Icecast client connects to the Icecast server. At this point the Icecast client can hear the Jamulus session. (2) Right so far? The Icecast client disconnects. Another Icecast client connects but now cannot hear audio. (3) Right so far? If that's all correct, I'd say "check the logs" to see what's reported as going on.
@dingodoppelt In addition to what @pljones asked, can you confirm that this behavior does not occur when using some other, non-Jamulus icecast client? I can't really think of anything which would influence this from within Jamulus or the QProcess vs. no-QProcess handling.
@hoffie: yes, my code from the original PR works.
I have just installed icecast and tried to use this feature (10400f5, the latest state of this PR). After some fiddling with icecast config/authentication, it worked, even with multiple clients on icecast and/or on Jamulus. So, I'm not able to reproduce the issue you are describing. Can you share some more details (icecast/ffmpeg version and config)? I used the default icecast config and configured a single listener mountpoint. Some further things I've observed:
👍
Personally, I'd leave this out for now. I need to get a proposal written for better server management. For example, at the moment, the SIGUSR1/2 handling for the jam recorder isn't portable, and if the jam server runs out of space, it simply raises an exception to alert the server operator (that is, it deliberately crashes the server). Neither is particularly nice. So I'm hoping to have something like a separate "management" thread that non-core services can communicate with. That should enable better and deeper status monitoring and control without impacting the main server too much. (I'm hoping a similar approach can be taken with the client, too, to allow a nicer "headless" mode there.) (I live in hope that my day-job workload will ease a bit so I have brain-space left to work out the details, along with everything else!)
Would it be possible to redirect every client channel arriving at the server to ReaRoute? (This is related to #1305, which @ann0see mentioned a few days ago.)
The client can direct both of its outputs (i.e. left and right channels) wherever you want. It only has the stereo mix. That's not going to change in the lifetime of "Jamulus" as "Jamulus". If you need a multi-channel, locally mixed solution, you'll need to look elsewhere.
@pljones Thank you for the response. @ann0see explained to me that the mixing is done in the server and the client only has the stereo mix; that's why I am asking if it can be done in the server (not the client). The same way every channel in the server is redirected to a file when recording audio, maybe it could be redirected to a virtual cable in order to be able to mix in a DAW.
The server isn't attached to any audio hardware currently. Doing so would slow it down, which would mean the number of clients a single server could support would drop. The jam recorder outputs each audio file unsynchronised. If you're trying to get multichannel realtime audio for mixing externally, you need it synchronised, which means keeping in time with an external clock. Again, that's unlikely to happen. I've already said, someone could take the existing jam recorder quite easily and replace each file with a stream. What each stream was attached to would then be up to the user. But they wouldn't be "in time with each other".
Thanks for the clarification about synchronization. And thanks for the efforts too.
I did some tests with Reastream VST plugins on the server side (see #146 (comment) and subsequent comments) but never had the time to make a workable prototype with the server. (I even started developing a Wireshark dissector for the protocol, https://github.com/WolfganP/reastream-wireshark, but that was the end of it due to other time-consuming tasks.)
@dingodoppelt Are you still planning to work on this? If not, would you be ok with someone else taking over?
@hoffie
I thought that this PR might be better off being held back until something like a management thread is introduced; that would make the stream (or a chat with the outside) modules or plugins. I don't really know if it is wise to merge an in-between state, since ffmpeg would have to go first (and I'm still not a programmer ;)
Thanks for your reply. In my opinion, it should be possible to bring this into a mergeable state with little effort. #967 (comment) outlines what should be done, IMO. I don't know how much demand there is for this feature, but I think we should either move this forward for the next+1 (3.9.0) release or close it for now. While I agree with @pljones that having such a management interface would be really nice, I doubt that it will be as simple as this PR, so it might take some more time. What do people think? Is there demand? (Maybe show via a thumbs-up reaction on this comment.) If there is demand and you (@dingodoppelt) don't plan to move this any further, then someone else could possibly step in and try to finish it?
I think this could go to a feature branch and be discussed later. A management interface etc. could be created - based on #1975 |
A stereo mix server is a local TCP server that outputs the mixed sound as s16le stereo @ 48000 Hz.
The output can be streamed to ffmpeg for sending to (e.g.) an Icecast server:
nc localhost $STEREO_MIX_PORT \
| ffmpeg -f s16le -ar 48000 -ac 2 -i - \
-acodec libmp3lame -ab 128k -f mp3 $ICECAST_URL
The implementation for `CServer::MixStream` comes from the Streamer2 PR:
jamulussoftware#967
By making the stream system pull-based (consumers ask Jamulus for data),
rather than push-based (Jamulus sends data to a streaming server), the
implementation is greatly simplified.
* No need to implement ways to toggle streaming on/off, as streaming
starts when a client connects and ends when the client disconnects.
* No need to deal with launching, monitoring, and killing sub-processes.
* Works with all OSes.
Co-authored-by: dingodoppelt <dexxter@top-email.net>
@dingodoppelt I now pushed it to a feature branch: https://github.com/jamulussoftware/jamulus/tree/feature/ffmpeg-streamer2 therefore closing. I think @dtinth is working on something related to this.
As soon as the management interface (JSON-RPC) is merged, the feature branch could be revived: https://github.com/jamulussoftware/jamulus/blob/feature/ffmpeg-streamer2/README.md |
This code calls ffmpeg to stream a stereo mix directly from the server, either to a streaming service like Icecast or to a file.