Discussion:
[Live-devel] Re-streaming RTP as RAW-UDP multicast transported as MPEGTS
Shyam Kaundinya
2018-08-27 00:33:39 UTC
Permalink
I am working on a project that uses the live555 proxy server to receive 4K video streams from an H.265 IP camera sent over a radio link, and to efficiently redistribute them over RTSP. I am able to connect a VLC client and play the RTSP stream from the proxy server. An additional customer requirement, however, is to also be able to receive the video stream as MPEGTS over multicast UDP (not RTP), for compatibility with some legacy software. I reviewed the many articles on the live555 forum; here is what I came up with. It is not clear to me if this is the right approach.

First I modified the RTSP client to request RAW-UDP from the proxy server as described here:
http://lists.live555.com/pipermail/live-devel/2011-November/014016.html

I verified that the SETUP command sends out the Transport header as RAW/RAW/UDP. Next, I use a BasicUDPSink to send out the data instead of the DummySink in that example. I wrote a quick multicast client (test code) to join the multicast group and dump the data. I see a whole bunch of frames beginning with the same sequence of bytes. I am thinking these are raw H.265 frames.

Next I added a MPEG2TransportStreamFramer to the incoming data and then send it to the BasicUDPSink as described here:
http://lists.live555.com/pipermail/live-devel/2015-April/019234.html
like this ...

struct in_addr outputAddress;
outputAddress.s_addr = our_inet_addr(outputAddressStr);
portNumBits outputPortNum = 4444;
Port const outputPort(outputPortNum);
unsigned char const outputTTL = 255;
outputGroupsock = new Groupsock(env, outputAddress, outputPort, outputTTL);
unsigned const maxPacketSize = 65536; // allow for large UDP packets
scs.subsession->sink = BasicUDPSink::createNew(env, outputGroupsock, maxPacketSize);

FramedSource* videoES = scs.subsession->readSource();
videoSource = MPEG2TransportStreamFramer::createNew(env, videoES);

scs.subsession->sink->startPlaying(*videoSource, subsessionAfterPlaying, scs.subsession);

Here is what I observe ...
1. I see frames beginning with 0x47 0x01 (as I understand it, 0x47 is the MPEGTS sync byte) coming into my multicast test client.
2. VLC is unable to play the MPEGTS video stream. I use the URL udp://@<multicast-group-ip-addr>:<portnum>. I do not see any video.
3. Also, the frames stop coming into my test multicast client program; the RTSP client seems blocked on the select() call on the socket inside BasicUDPSink. I am looking further into why this is happening.

Questions:
1. Am I pursuing the right strategy to accomplish my final objective, namely playing an MPEGTS stream over multicast UDP, the video source being the proxy server?
2. If yes, what is the best way to verify that the RAW-UDP data I receive in my RTSP client are indeed H.265 frames?
3. Also, what is the best way to verify that the MPEGTS framing is being sent to the multicast group?
Ross Finlayson
2018-08-27 01:19:41 UTC
Permalink
Post by Shyam Kaundinya
1. Am I pursuing the right strategy to accomplish my final objective, namely playing an MPEGTS stream over multicast UDP, the video source being the proxy server?
Perhaps. An alternative approach, of course, would be for your RTSP client application to read directly from the source video stream (i.e., without using a proxy server at all). But presumably you have some reason for wanting to use a proxy server (e.g., to support additional (regular) RTSP video player clients as well?).
Post by Shyam Kaundinya
2. If yes, what is the best way to verify that the RAW-UDP data I receive in my RTSP client are indeed H.265 frames?
If the source stream is indeed H.265, then the data that you receive in your RTSP client *will* be H.265 NAL units.

However, for your receiving video player (e.g., VLC) to be able to understand/play the stream, you probably need to prepend the stream with three special H.265 NAL units: The SPS, PPS, and VPS NAL units. See the last two paragraphs of this FAQ entry:
http://live555.com/liveMedia/faq.html#testRTSPClient-how-to-decode-data
Post by Shyam Kaundinya
3. Also, what is the best way to verify that the MPEGTS framing is being sent to the multicast group?
I suggest that - before streaming the H.265/Transport Stream data over multicast - you first write it to a file (i.e., using “FileSink” instead of “BasicUDPSink”). Then you can try playing the file (locally) using VLC. If (and only if) that works OK, you can then try streaming it.
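The "FileSink" swap is a one-line change at the point where the sink is created. A sketch of the substitution, assuming the same testRTSPClient-style variables (env, scs, videoSource) as in the earlier code, and a hypothetical output filename:

```cpp
// Instead of:
//   scs.subsession->sink = BasicUDPSink::createNew(env, outputGroupsock, maxPacketSize);
// write the Transport Stream data to a local file first:
scs.subsession->sink = FileSink::createNew(env, "test.ts");

FramedSource* videoES = scs.subsession->readSource();
videoSource = MPEG2TransportStreamFramer::createNew(env, videoES);
scs.subsession->sink->startPlaying(*videoSource, subsessionAfterPlaying, scs.subsession);
```

Only once "test.ts" plays cleanly in VLC is it worth debugging the multicast path.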


And finally, a reminder (to everyone) that if you are using the “LIVE555 Streaming Media” software in a product, you are free to do so, as long as you comply with the conditions of the GNU LGPL v3 license; see:
http://live555.com/liveMedia/faq.html#copyright-and-license


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
Shyam Kaundinya
2018-08-28 14:28:14 UTC
Permalink
Re#1. Yes, I use the proxy to support additional clients, which are RTSP.

Re#2.
a> In trying to implement the FAQ recommendation of using fmtp_spropvps()/fmtp_spropsps()/fmtp_sproppps() and passing the values to parseSPropParameterSets(), I followed the code in the H265VideoRTPSink::auxSDPLine() and createNew() functions, removing the parts of the code that look for a fragmenter. When building the "a=fmtp:%d" string, the sample code uses rtpPayloadType(). It is not clear to me where to get this value, since my subsession's source is a FramedSource*. The RTP header format suggests that this is not a fixed value; it is case-dependent, which seems to mean it needs to be extracted from the incoming stream. Any help or sample would be much appreciated.

b> Do I need to use all the code in the auxSDPLine function? It seems to build a proper a=fmtp string, but is sending out just a concatenation of the base64 encodings of vps+sps+pps sufficient (dropping all the profile/tier parameters)? It is not clear from the FAQ article.

void preparePropSets(MediaSubsession& scs_subsession)
{
  char const* sPropVPSStr = scs_subsession.fmtp_spropvps();
  char const* sPropSPSStr = scs_subsession.fmtp_spropsps();
  char const* sPropPPSStr = scs_subsession.fmtp_sproppps();

  // Parse each 'sProp' string, extracting and then classifying the NAL unit(s) from each one.
  // We're 'liberal in what we accept'; it's OK if the strings don't contain the NAL unit type
  // implied by their names (or if one or more of the strings encode multiple NAL units).
  unsigned numSPropRecords[3];
  SPropRecord* sPropRecords[3]; // parsed records for the VPS, SPS, and PPS strings respectively
  sPropRecords[0] = parseSPropParameterSets(sPropVPSStr, numSPropRecords[0]);
  sPropRecords[1] = parseSPropParameterSets(sPropSPSStr, numSPropRecords[1]);
  sPropRecords[2] = parseSPropParameterSets(sPropPPSStr, numSPropRecords[2]);

  for (unsigned j = 0; j < 3; ++j) {
    SPropRecord* records = sPropRecords[j];
    unsigned numRecords = numSPropRecords[j];

    for (unsigned i = 0; i < numRecords; ++i) {
      if (records[i].sPropLength == 0) continue; // bad data
      u_int8_t nal_unit_type = ((records[i].sPropBytes[0])&0x7E)>>1;
      if (nal_unit_type == 32/*VPS*/) {
        vps = records[i].sPropBytes; // vps/sps/pps and their sizes are file-scope variables
        vpsSize = records[i].sPropLength;
      } else if (nal_unit_type == 33/*SPS*/) {
        sps = records[i].sPropBytes;
        spsSize = records[i].sPropLength;
      } else if (nal_unit_type == 34/*PPS*/) {
        pps = records[i].sPropBytes;
        ppsSize = records[i].sPropLength;
      }
    }
    // Note: the SPropRecord arrays are deliberately not deleted here,
    // because vps/sps/pps point into their buffers.
  }
}

char const* auxSDPLine(FramedSource* framerSource)
{
  // Generate a new "a=fmtp:" line each time, using our VPS, SPS and PPS (if we have them),
  // otherwise parameters from our framer source (in case they've changed since the last time
  // that we were called):

  // Set up the "a=fmtp:" SDP line for this stream.
  u_int8_t* vpsWEB = new u_int8_t[vpsSize]; // "WEB" means "Without Emulation Bytes"
  unsigned vpsWEBSize = removeH264or5EmulationBytes(vpsWEB, vpsSize, vps, vpsSize);
  if (vpsWEBSize < 6/*'profile_tier_level' offset*/ + 12/*num 'profile_tier_level' bytes*/) {
    // Bad VPS size => assume our source isn't ready
    delete[] vpsWEB;
    return NULL;
  }
  u_int8_t const* profileTierLevelHeaderBytes = &vpsWEB[6];
  unsigned profileSpace = profileTierLevelHeaderBytes[0]>>6; // general_profile_space
  unsigned profileId = profileTierLevelHeaderBytes[0]&0x1F; // general_profile_idc
  unsigned tierFlag = (profileTierLevelHeaderBytes[0]>>5)&0x1; // general_tier_flag
  unsigned levelId = profileTierLevelHeaderBytes[11]; // general_level_idc
  u_int8_t const* interop_constraints = &profileTierLevelHeaderBytes[5];
  char interopConstraintsStr[100];
  sprintf(interopConstraintsStr, "%02X%02X%02X%02X%02X%02X",
          interop_constraints[0], interop_constraints[1], interop_constraints[2],
          interop_constraints[3], interop_constraints[4], interop_constraints[5]);
  delete[] vpsWEB;

  char* sprop_vps = base64Encode((char*)vps, vpsSize);
  char* sprop_sps = base64Encode((char*)sps, spsSize);
  char* sprop_pps = base64Encode((char*)pps, ppsSize);

  char const* fmtpFmt =
    "a=fmtp:%d profile-space=%u"
    ";profile-id=%u"
    ";tier-flag=%u"
    ";level-id=%u"
    ";interop-constraints=%s"
    ";sprop-vps=%s"
    ";sprop-sps=%s"
    ";sprop-pps=%s\r\n";
  unsigned fmtpFmtSize = strlen(fmtpFmt)
    + 3 /* max num chars: rtpPayloadType */ + 20 /* max num chars: profile_space */
    + 20 /* max num chars: profile_id */
    + 20 /* max num chars: tier_flag */
    + 20 /* max num chars: level_id */
    + strlen(interopConstraintsStr)
    + strlen(sprop_vps)
    + strlen(sprop_sps)
    + strlen(sprop_pps);
  char* fmtp = new char[fmtpFmtSize];
  sprintf(fmtp, fmtpFmt,
          rtpPayloadType(), profileSpace,
          profileId,
          tierFlag,
          levelId,
          interopConstraintsStr,
          sprop_vps,
          sprop_sps,
          sprop_pps);

  delete[] sprop_vps;
  delete[] sprop_sps;
  delete[] sprop_pps;

  return fmtp;
}
======

c> Do I need to send out the prop-sets (VPS+SPS+PPS) before sending out every incoming frame? Since I am sinking with UDP multicast, there is no concept of "a client establishing a connection". As such, I won't be able to tell when a client starts reading. So it seems to me I either need to send them periodically or before every frame. If periodic, what is a good time interval at which to keep resending them?

d> In order to sink the prop-sets, I am using memmove() to write into the fTo variable in FramedSource. But fTo is protected, so I am guessing I would need to add a public accessor function to get it, and sink to it as follows:

FramedSource.hh:
[...]
public:
unsigned char* getfTo() { return fTo;}
[...]

My testRTSPClient:
[...]
videoESRawUDP = (FramedSource*) scs.subsession->readSource();
preparePropSets(*scs.subsession);
char const* fmtp = auxSDPLine(videoESRawUDP);

unsigned char* to = videoESRawUDP->getfTo();
memmove(to, fmtp, strlen(fmtp));

// After delivering the data, inform the reader that it is now available:
FramedSource::afterGetting(videoESRawUDP);
[...]

e> As an alternative to <d>, I am guessing I could subclass BasicUDPSource, adding a public member function to expose the fTo field. Then I would sink the FramedSource from the subsession into it, and then sink the subclassed object into the BasicUDPSink for multicasting. This seems like the cleaner approach. Is it valid?

Re#3:
Just to clarify here: the customer requirements don't explicitly specify streaming over MPEGTS, just that the stream should be "playable" as RAW-UDP (not RTP). I was under the impression that I would need a container around the RAW-UDP data for that to be possible, and hence chose MPEGTS. Sorry to have mischaracterized this in my earlier post. Based on your responses so far, it appears that if I send out the prop-set stuff properly, I may not need the MPEGTS wrapper for a VLC client to be able to play it. Is my understanding correct?

Re: licensing terms. Yes. The end product for deployment will use subclasses rather than changes to the live555 code. For example, I subclass MediaSession to create a new object that requests RAW-UDP from an RTSP server by modifying the "Transport" parameter, as recommended by your FAQ.

Ross Finlayson
2018-08-28 19:02:24 UTC
Permalink
Post by Shyam Kaundinya
Re#2.
a> In trying to implement the FAQ recommendation of using fmtp_spropvps(),sps,pps and then passing the values to parseSPropParameterSets, I tried to follow the code in H265VideoRTPSink::auxSDPLine and createNew functions and removed the parts of code that look for a fragmenter.
Sorry, but if you modify the supplied source code, you can expect no support on this mailing list.

But in any case, you won’t use “H265VideoRTPSink” at all, because you’re not streaming via RTP. Instead, since (as you explained in your earlier email) you want to send the H.265 NAL units over raw UDP, you should use a “BasicUDPSink”.

But as I explained earlier, you should first try writing the Transport Stream data to a file, and try playing that file, before you try streaming over UDP.
Post by Shyam Kaundinya
b> Do I need to use all the code in the auxSDPLine function?
The “auxSDPLine()” function is completely irrelevant for you, because you won’t be creating a SDP description for your outgoing stream (because you won’t be creating a RTSP server for it). (And even if you were, the outgoing stream would just be a Transport Stream, which wouldn’t need any ‘alternative’ lines in the SDP description anyway.)


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
Shyam Kaundinya
2018-08-28 22:23:30 UTC
Permalink
Thank you for your response.

1/ Re: your response
Re: a
I just want to clear the air on the misunderstanding/miscommunication regarding modification of supplied code. I did not modify the supplied source code. I took testRTSPClient.cpp and made my own version of it, following the example of two functions in H265VideoRTPSink: createNew(), which calls fmtp_spropvps()/fmtp_spropsps()/fmtp_sproppps() and passes the output to parseSPropParameterSets() (createNew() in H265VideoRTPSink uses a fragmenter, but what I meant is that I don't/won't need one in my RTSP client just for parsing the sProp parameter sets), and auxSDPLine(), as an example of building the prop-set string (which, from your explanation, is no longer required). I am not using H265VideoRTPSink for anything other than as reference code for the fmtp_spropvps() and parseSPropParameterSets() functions that the FAQ recommends for decoding received data in an RTSP client: http://www.live555.com/liveMedia/faq.html#testRTSPClient-how-to-decode-data

Re: b
Noted. Thank you.


2/ Since I am sinking with UDP multicast, there is no concept of "a client establishing a connection". As such, would I need to send out the prop-sets (VPS+SPS+PPS) before every incoming frame, before every I-frame, or at some interval via a background handler?



3/ I was under the impression that I would need a container around the RAW-UDP data for it to be playable, and hence chose to use an MPEGTS framer source. Sorry to have mischaracterized this in my earlier post. Based on your responses so far, it appears that if I send out the prop-set stuff properly, I may not need the MPEGTS wrapper at all for a VLC client to be able to play it. Is my understanding correct?



Regards

Shyam
Ross Finlayson
2018-08-28 22:35:33 UTC
Permalink
Post by Shyam Kaundinya
2/ Since I am sinking with UDP multicast, there is no concept of "a client establishing a connection". As such, would I need to send out the prop-sets (VPS+SPS+PPS) before every incoming frame, before every I-frame, or at some interval via a background handler?
I suggest sending them out before each I-frame.
Post by Shyam Kaundinya
3/ I was under the impression that I would need a container around the RAW-UDP data for it to be playable, and hence chose to use an MPEGTS framer source. Sorry to have mischaracterized this in my earlier post. Based on your responses so far, it appears that if I send out the prop-set stuff properly, I may not need the MPEGTS wrapper at all for a VLC client to be able to play it. Is my understanding correct?
No. All the receiver sees is a Transport Stream (over multicast raw-UDP). Therefore, the SPS, PPS, and VPS NAL units need to be fed to the TransportStreamMultiplexor, just like every other H.265 NAL unit.
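If I read the library right, the multiplexor subclass for building a Transport Stream from an elementary stream is "MPEG2TransportStreamFromESSource" ("MPEG2TransportStreamFramer", as used in the first post, expects data that is already a Transport Stream, which would explain the playback failure). A sketch of the sender-side chain under that assumption; the mpegVersion code for H.265 and the exact addNewVideoSource() signature should be verified against "MPEG2TransportStreamFromESSource.hh":

```cpp
// Multiplex the H.265 elementary-stream NAL units into a Transport Stream,
// then send the resulting TS packets over the multicast Groupsock.
FramedSource* videoES = scs.subsession->readSource(); // H.265 NAL units (with VPS/SPS/PPS injected)

MPEG2TransportStreamFromESSource* tsSource
  = MPEG2TransportStreamFromESSource::createNew(env);
tsSource->addNewVideoSource(videoES, 6/*mpegVersion for H.265 -- verify in the header*/);

scs.subsession->sink = BasicUDPSink::createNew(env, outputGroupsock, 1316/*7 x 188-byte TS packets*/);
scs.subsession->sink->startPlaying(*tsSource, subsessionAfterPlaying, scs.subsession);
```

The 1316-byte packet size (seven TS packets per datagram) is a common convention for TS over UDP, not a live555 requirement.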


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
