Discussion:
[Live-devel] RTSP Live streaming from USB camera
Giovanni Iamonte
2014-11-13 13:50:08 UTC
Hello



Our goal was to generate an H.264 + AAC live stream starting from a USB camera and a microphone.

To reach that goal we used the following chain: USB camera -> ffmpeg -> live555 RTSP server.

The live stream works: when we connect with a client such as VLC or ffplay, we can see the camera feed.



The only problem is that we can only serve a limited number of connections (VLC clients), and this number depends on the source's resolution.

If we exceed this number, all the VLC clients begin to display artifacts.



A source resolution of 320 x 240 allows just 6 VLC connections.

A source resolution of 640 x 480 allows just 3 VLC connections.

A source resolution of 1920 x 1080 allows just 1 VLC connection.



We have already checked the CPU usage and the bandwidth: the CPU usage is around 40% and the average bandwidth is 1 Mbit/s.

The OS is Windows.



Below is what we did:



1) We used ffmpeg to capture the images from the camera and encode them to H.264 + AAC frames (libavcodec).

2) These frames were pushed into a circular queue.

3) In a thread we created an RTSP server, the media session and two subsessions, one for the video and the other for the audio (see the code below).

4) Starting from DeviceSource.cpp we created a source that reads the frames from the circular queue (a sketch of this pattern follows the list).



5) When a client connects to the RTSP server, we create a new StreamSource and a new RTPSink. As you can see in the code below, for the video StreamSource we wrap our source in an H264VideoStreamDiscreteFramer; the audio source we leave as it is.

Regarding the RTPSink, for the video we create an H264VideoRTPSink and for the audio we create an MPEG4GenericRTPSink.
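
Roughly, our queue-reading source follows the pattern sketched below. This is a simplified illustration rather than our actual code: EncodedFrame, FrameQueue and its pop() are placeholder types standing in for our circular queue, and the class is reduced to the bare live555 FramedSource plumbing.

// Minimal sketch (not the real code) of a DeviceSource-style class that
// pulls encoded frames from a circular queue. EncodedFrame, FrameQueue and
// pop() are hypothetical placeholders; the rest is the standard live555
// FramedSource pattern.

#include "FramedSource.hh"
#include <string.h>

struct EncodedFrame {                 // hypothetical: one encoded frame
    unsigned char* data;
    unsigned size;
    struct timeval presentationTime;
};

struct FrameQueue {                   // hypothetical queue interface
    virtual bool pop(EncodedFrame& out) = 0;  // non-blocking; false if empty
};

class QueueSource : public FramedSource {
public:
    static QueueSource* createNew(UsageEnvironment& env, FrameQueue& queue) {
        return new QueueSource(env, queue);
    }

protected:
    QueueSource(UsageEnvironment& env, FrameQueue& queue)
        : FramedSource(env), fQueue(queue) {}

private:
    virtual void doGetNextFrame() {
        EncodedFrame frame;
        if (!fQueue.pop(frame)) {
            // Queue empty: poll again shortly. (A production version would
            // use TaskScheduler's event-trigger mechanism instead.)
            envir().taskScheduler().scheduleDelayedTask(10000 /*us*/,
                                                        retry, this);
            return;
        }

        // For an H264VideoStreamDiscreteFramer downstream, each video frame
        // must be a single NAL unit WITHOUT the 0x00000001 start code.
        if (frame.size > fMaxSize) {
            fNumTruncatedBytes = frame.size - fMaxSize;
            fFrameSize = fMaxSize;
        } else {
            fNumTruncatedBytes = 0;
            fFrameSize = frame.size;
        }
        memcpy(fTo, frame.data, fFrameSize);
        fPresentationTime = frame.presentationTime;

        FramedSource::afterGetting(this);  // hand the frame to live555
    }

    static void retry(void* clientData) {
        ((QueueSource*)clientData)->doGetNextFrame();
    }

    FrameQueue& fQueue;  // shared with the ffmpeg capture/encode thread
};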







I would appreciate any help.



Thanks



Bye





**********************************************************************************

unsigned long WINAPI Live555Thread(void* param)
{
    OutPacketBuffer::maxSize = MAX_FRAME_SIZE;

    TaskScheduler* serverTsk = BasicTaskScheduler::createNew();
    UsageEnvironment* serverEnv = BasicUsageEnvironment::createNew(*serverTsk);

    RTSPServer* rtspServer = RTSPServer::createNew(*serverEnv, g_nRTSPServerPort, NULL);
    ServerMediaSession* sms;

    if (rtspServer == NULL) {
        // Note: UsageEnvironment's operator<< does not do printf-style
        // formatting; the error message must be streamed in explicitly.
        *serverEnv << "LIVE555: Failed to create RTSP server: "
                   << serverEnv->getResultMsg() << "\n";
        return 0;
    } else {
        char const* descriptionString = "Session streamed by \"QMServer\"";
        char RTSP_Address[1024];
        RTSP_Address[0] = 0x00;

        sms = ServerMediaSession::createNew(*serverEnv, RTSP_Address,
                                            RTSP_Address, descriptionString);
        sms->addSubsession(Live555ServerMediaSubsession::createNew(
            VIDEO_TYPE, *serverEnv, ESTIMATED_VIDEO_BITRATE));
        sms->addSubsession(Live555ServerMediaSubsession::createNew(
            AUDIO_TYPE, *serverEnv, ESTIMATED_AUDIO_BITRATE));
        rtspServer->addServerMediaSession(sms);
    }

    char* url = rtspServer->rtspURL(sms);
    *serverEnv << "Play this stream using the URL \"" << url << "\"\n";
    delete[] url;  // rtspURL() returns a heap-allocated string

    // doEventLoop() returns once another thread sets the watch variable
    // g_cExitThread to a nonzero value.
    serverEnv->taskScheduler().doEventLoop(&g_cExitThread);

    Medium::close(rtspServer);
    return 0;
}



**********************************************************************************



FramedSource* Live555ServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate)
{
    estBitrate = fEstimatedKbps;

    m_source = Live555Source::createNew(envir(), m_type, false);
    if (m_type == VIDEO_TYPE) {
        // The discrete framer expects one NAL unit at a time, without the
        // 0x00000001 start code.
        return H264VideoStreamDiscreteFramer::createNew(envir(), m_source);
    } else {
        return m_source;
    }
}



RTPSink* Live555ServerMediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/,
    FramedSource* inputSource)
{
    OutPacketBuffer::maxSize = MAX_FRAME_SIZE;

    if (m_type == VIDEO_TYPE) {
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock, 96);
    } else {
        // Build the 2-byte AAC AudioSpecificConfig: 5 bits audio object
        // type, 4 bits sampling-frequency index, 4 bits channel
        // configuration, 3 bits padding.
        unsigned char audioSpecificConfig[2];
        char fConfigStr[10];
        audioSpecificConfig[0] = (AUDIO_AAC_TYPE << 3) | (AUDIO_SRATE_INDEX >> 1);
        audioSpecificConfig[1] = (AUDIO_SRATE_INDEX << 7) | (AUDIO_CHANNELS << 3);
        sprintf(fConfigStr, "%02X%02X", audioSpecificConfig[0],
                audioSpecificConfig[1]);

        return MPEG4GenericRTPSink::createNew(envir(), rtpGroupsock, 96,
                                              AUDIO_SRATE, "audio", "AAC-hbr",
                                              fConfigStr, AUDIO_CHANNELS);
    }
}





________________________________________________________________

Ing. Giovanni Iamonte

Area Tecnologie e sviluppi

Quintetto Srl - Pont Saint Martin (AO)

mobile: +39 393 9196310

tel: +39 0165 1845290

e-mail: ***@quintetto.it

web: www.quintetto.it
Ross Finlayson
2014-11-13 16:42:45 UTC
The only problem is that we can only serve a limited number of connections (VLC clients), and this number depends on the source's resolution.
If we exceed this number, all the VLC clients begin to display artifacts.
A source resolution of 320 x 240 allows just 6 VLC connections.
A source resolution of 640 x 480 allows just 3 VLC connections.
A source resolution of 1920 x 1080 allows just 1 VLC connection.
Issues like this are almost always caused by running into resource limitations (CPU and/or network), rather than by any inherent problem with the LIVE555 software.

Note also that (based on the experience of others) running more than one copy of VLC on the same computer tends to perform very poorly, so if you’re testing multiple VLC clients, you should do so on separate computers (separate *physical* computers, not separate ‘virtual machines’).

(Also, a reminder (yet again) that VLC is not our software. The best way to test RTSP client connections is to begin with our “openRTSP” software: <http://www.live555.com/openRTSP/>. Then, and only then, should you use a media player (such as VLC).)
The OS is Windows.
That may well be (at least part of) your problem :-( Windows is simply not a serious operating system for running server software. (It’s 2014; no one should be doing this anymore.)
1) We used ffmpeg to capture the images from the camera and encode them to H.264 + AAC frames (libavcodec).
2) These frames were pushed into a circular queue.
3) In a thread we created an RTSP server, the media session and two subsessions, one for the video and the other for the audio (see the code below).
4) Starting from DeviceSource.cpp we created a source that reads the frames from the circular queue.
5) When a client connects to the RTSP server, we create a new StreamSource and a new RTPSink. As you can see in the code below, for the video StreamSource we wrap our source in an H264VideoStreamDiscreteFramer; the audio source we leave as it is.
Regarding the RTPSink, for the video we create an H264VideoRTPSink and for the audio we create an MPEG4GenericRTPSink.
This all looks good. One more thing. Because you’re streaming from a ‘live source’, make sure that your “Live555ServerMediaSubsession” constructor - when it calls the “OnDemandServerMediaSubsession” constructor - sets the “reuseFirstSource” parameter to True. (That way, your input source will never be instantiated more than once concurrently, regardless of how many RTSP clients you have.)
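
I.e., something like the following sketch. (The constructor signature here is a guess at your class, based on the createNew() calls in your code; the point is the second argument to the “OnDemandServerMediaSubsession” constructor.)

// Hypothetical constructor for the poster's subsession class. The key
// detail is passing reuseFirstSource = True, so that a single instance of
// the input source is shared by all concurrent RTSP clients.
Live555ServerMediaSubsession::Live555ServerMediaSubsession(
        int type, UsageEnvironment& env, unsigned estimatedKbps)
    : OnDemandServerMediaSubsession(env, True /*reuseFirstSource*/),
      m_type(type), fEstimatedKbps(estimatedKbps) {
}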


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
