


How to Format the RTSP Address for IP Camera Sources

Raspberry Pis are wonderful little computers, but sometimes they lack the oomph to get stuff done. That may change with the new Raspberry Pi 4, but what to do with all those older ones? Or how about that pile of old webcams? Well, this article will help turn all those into a full-on security system. (You can also use a Raspberry Pi camera if you have one!)

Other posts I have read on this subject often only use motion to capture detection events locally. Sometimes they go a bit further and set up the Raspberry Pi to stream MJPEG as an IP camera, or set up MotionEyeOS and turn it into a standalone video surveillance system.

With our IP camera, we are going to take it further and encode the video stream locally, then send it over the network via RTSP. This will save huge amounts of bandwidth! It also does not require the client to re-encode the stream before saving, distributing the work. That way we can also hook it into a larger security suite without draining any of its resources; in this case I will use Blue Iris.

Now, the first thing I am going to do is discourage you. If you don't already have a Pi and a webcam or Pi camera for the cause, don't run out to buy them just for this. It's just not economical. A 1080p WiFi camera with ONVIF capabilities can be had for less than $50. So why do this at all? Well, because: 1) it's all under your control and no worry about Chinaware, 2) if you already have the equipment, it's another free security camera, and 3) it's fun.

Update: If you're just looking for results, check out my helper script that does all the work for you!

wget https://raw.githubusercontent.com/cdgriffith/pi_streaming_setup/master/streaming_setup.py
sudo python3 streaming_setup.py --rtsp

Standard Raspbian setup

Not going to get into too much detail here. If you haven't already, download Raspbian and get it onto an SD card. (I used Raspbian Buster for this tutorial.) If you aren't going to connect a display and keyboard to it, make sure to add an empty file named ssh to the root of the boot (SD card) drive. That way you can just SSH to the Raspberry Pi via command line, or PuTTY on Windows.

# Default settings
host: raspberrypi
username: pi
password: raspberry

Remember to run sudo raspi-config, change your password and don't forget to set up WiFi, then reboot. Also, it's a good idea to update the system before continuing.

sudo apt update --fix-missing
sudo apt upgrade -y
sudo reboot

Install a node rtsp server

To start with, we need a place for ffmpeg to connect to for the RTSP connection. Most security systems expect to connect to an RTSP server, instead of listening as a server themselves, so we need a middleman.

There are a lot of RTSP server options out there; I wanted to go with a lightweight one we can just run on the Pi itself that is easy to install and run. This is what I run at my own house, so don't think I'm skimping out for this post 😉

UPDATE: I have stopped using the below server, and instead use rtsp-simple-server, which has ARM builds pre-compiled. This is what is used with the helper script. (Not because the Node based one gave me problems, but this other one is much more lightweight and easy to install.)

First off, we need to install Node.js. The easiest way I have found is to use the pre-created scripts that add the proper package links to the apt system for us.

If you are on an ARMv6 based system, such as the Pi Zero, you will need to do a little extra work to install Node. For ARMv7 systems, like anything Raspberry Pi 3 or newer, we will use Node 12. Find out your ARM version with the uname -a command and see if the string "armv6" or "armv7" appears.
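If you want to script that check rather than eyeball it, here is a minimal sketch; `arch_class` is an invented helper name, not something from the helper script.

```shell
#!/bin/sh
# Sketch: classify the machine string reported by `uname -m`.
# `arch_class` is a hypothetical name for illustration only.
arch_class() {
  case "$1" in
    armv6*) echo "armv6" ;;                  # Pi Zero / Pi 1: manual Node install
    armv7*|armv8*|aarch64) echo "armv7+" ;;  # Pi 3 or newer: NodeSource packages work
    *) echo "other" ;;
  esac
}

arch_class "$(uname -m)"
```

On a Pi Zero this prints `armv6`, telling you to take the manual install path.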

Now, let's install Node.js and the other needed libraries, such as git and coffeescript. If you want to view the script itself before running it, it is available to view here.

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs git
sudo npm install -g coffeescript

Once that is complete, we want to download the node rtsp server code and install all its dependencies. Note, I am assuming you are doing this in the root of your home folder, which we will later use as the base directory for the service.

cd ~
git clone https://github.com/iizukanao/node-rtsp-rtmp-server.git --depth 1
cd node-rtsp-rtmp-server
npm install -d

Now you should be good to go; you can test it out by running:

sudo coffee server.coffee

It takes about 60 seconds or more to start up, so give it a minute before you will see any text. Example output is below.

2019-12-16 14:24:18.465 attachRecordedDir: dir=file app=file
(node:6812) [DEP0005] DeprecationWarning: Buffer() is deprecated ...
2019-12-16 14:24:18.683 [rtmp] server started on port 1935
2019-12-16 14:24:18.691 [rtsp/http/rtmpt] server started on port 80

Simply make sure it starts up, then you can stop it by hitting Ctrl+C. At this point you can also go into the server.coffee file and edit it to your heart's content, however I keep it standard myself.

Create rtsp server service

You probably want this to always start on boot, so let's add it as a systemd service. Copy and paste the following code into /etc/systemd/system/rtsp_server.service.

# /etc/systemd/system/rtsp_server.service

[Unit]
Description=rtsp_server
After=network.target rc-local.service

[Service]
Restart=always
WorkingDirectory=/home/pi/node-rtsp-rtmp-server
ExecStart=coffee server.coffee

[Install]
WantedBy=multi-user.target

Now we can start it up via the service, and enable it to start on boot.

sudo systemctl start rtsp_server
# Can make sure it works with
sudo systemctl status rtsp_server
sudo systemctl enable rtsp_server

Compile FFMPEG with Hardware Acceleration

If you are just using the Raspberry Pi camera, or another one with h264 or h265 built-in support, you can use the distribution version of ffmpeg instead.

UPDATE: The built-in FFmpeg now has hardware acceleration built in, so you can skip the compilation, or use my helper script to compile it for you with a lot of extras.

This is going to take a while to build. I suggest reading a good blog post or watching some Red vs Blue while it builds. This guide is just small modifications from another one. We are also adding the libfreetype font package so we can add text (like a datetime) to the video stream, as well as the default libx264 so that we can use it with the Pi Camera if you have one.

sudo apt-get install libomxil-bellagio-dev libfreetype6-dev libmp3lame-dev checkinstall libx264-dev fonts-freefont-ttf libasound2-dev -y
cd ~
git clone https://github.com/FFmpeg/FFmpeg.git --depth 1
cd FFmpeg
sudo ./configure --arch=armel --extra-libs="-lpthread -lm" --extra-ldflags="-latomic" --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree --enable-libfreetype --enable-libx264 --enable-libmp3lame --enable-mmal --enable-indev=alsa --enable-outdev=alsa

# For old hardware / Pi Zero remove the `-j4`
sudo make -j4

When that is finally done, run the steps below to install it. We take the additional precaution of turning it into a standard system package and holding it so we don't overwrite our ffmpeg version.

sudo checkinstall --pkgname=ffmpeg -y
sudo apt-mark hold ffmpeg
echo "ffmpeg hold" | sudo dpkg --set-selections

Figure out your camera details

If you haven't already, plug the webcam into the Raspberry Pi. Then we are going to use video4linux2 to discover what it's capable of.

v4l2-ctl --list-devices

Mine lists my webcam and the two paths it's located at. Sometimes a camera will have multiple devices for different types of formats it supports, so it's a good idea to check each one out.

Microsoft® LifeCam Cinema(TM): (usb-3f980000.usb-1.2):
        /dev/video0
        /dev/video1

Now we need to see what resolutions and FPS it can handle. Be warned, MJPEG streams are much more taxing to encode than some of their counterparts. In this case we are going to specifically try to find YUYV 4:2:2 streams, as they are a lot easier to encode. (Unless you see h264, then use that!)

In my small testing group, MJPEG streams averaged just 70% of the FPS of the YUYV, while running the CPU up to 60%. Comparatively, YUYV encoding only took 20% of the CPU usage on average.

v4l2-ctl -d /dev/video0 --list-formats-ext

This pumps out a lot of info. Basically you want to find the subset under YUYV and figure out which resolution and fps you want. Here is an example of some of the ones my webcam supports.

ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.050s (20.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                Size: Discrete 960x544
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)

I am going to be using the max resolution of 1280×720 and the highest fps of 10. Now if it looks perfect as is, you can skip to the next section. Though if you need to tweak the brightness, contrast or other camera settings, read on.
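If you would rather extract those mode lines programmatically, a quick awk sketch can do it; `parse_modes` is an invented helper fed with a hard-coded sample here, not part of v4l2-ctl.

```shell
#!/bin/sh
# Sketch: turn `v4l2-ctl --list-formats-ext` output into "WxH@fps" lines.
# `parse_modes` is a hypothetical helper name for illustration.
parse_modes() {
  awk '/Size: Discrete/   {size = $3}
       /Interval: Discrete/ {gsub(/[()]/, "", $4); print size "@" $4}'
}

# Normally you would run: v4l2-ctl -d /dev/video0 --list-formats-ext | parse_modes
printf '%s\n' \
  "                Size: Discrete 1280x720" \
  "                        Interval: Discrete 0.100s (10.000 fps)" \
  | parse_modes
# prints 1280x720@10.000
```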

Image tweaks

Let's figure out what settings we can play with on the camera.

v4l2-ctl -d /dev/video0 --all
brightness (int)                : min=30 max=255 step=1 default=-8193 value=135
contrast (int)                  : min=0 max=10 step=1 default=57343 value=5
saturation (int)                : min=0 max=200 step=1 default=57343 value=100
power_line_frequency (menu)     : min=0 max=2 default=2 value=2
sharpness (int)                 : min=0 max=50 step=1 default=57343 value=27
backlight_compensation (int)    : min=0 max=10 step=1 default=57343 value=0
exposure_auto (menu)            : min=0 max=3 default=0 value=3
exposure_absolute (int)         : min=5 max=20000 step=1 default=156 value=156
pan_absolute (int)              : min=-201600 max=201600 step=3600 default=0
tilt_absolute (int)             : min=-201600 max=201600 step=3600 default=0
focus_absolute (int)            : min=0 max=40 step=1 default=57343 value=12
focus_auto (bool)               : default=0 value=0
zoom_absolute (int)             : min=0 max=10 step=1 default=57343 value=0

Plenty of options, excellent. Now, if you don't have a way to look at the camera display just yet, come back to this part after you have the live stream going. You can change these settings while it is running, thankfully.

v4l2-ctl -d /dev/video0 --set-ctrl <setting>=<value>

The main problems I had with my camera were that it was a little dark and liked to auto-focus every 5~10 seconds. So I added the following lines to my rc.local file, but there are various ways to run commands on startup.

# I added these lines right before the exit 0

# dirty hack to make sure v4l2 has time to initialize the cameras
sleep 10

v4l2-ctl -d /dev/video0 --set-ctrl focus_auto=0
v4l2-ctl -d /dev/video0 --set-ctrl focus_absolute=12
v4l2-ctl -d /dev/video0 --set-ctrl brightness=135

Now onto the fun stuff!

Real Time Encoding

Now we are going to use the hardware accelerated ffmpeg encoder h264_omx to encode the webcam stream. That is, unless you happen to already be using a camera that supports h264 natively, like the built-in Raspberry Pi camera. If you are lucky enough to have one, you can just copy the output straight to the RTSP stream.

# Only for cameras that support h264 natively!
ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -an -f rtsp rtsp://localhost:80/live/stream

If at any point you receive the error ioctl(VIDIOC_STREAMON) failure : 1, Operation not permitted, go into raspi-config and up the video memory (memory split) to 256, then reboot.

In the code below, make sure to change the -s 1280x720 to your video resolution (you can also use -video_size instead of -s) and both -r 10 occurrences to your frame rate (you can also use -framerate).

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream

And then lets restriction this downwards. The outset function is telling ffmpeg what to expect from your device, -i /dev/video0. Which means all those arguments must become earlier the annunciation of the device itself.

-input_format yuyv422 -f video4linux2 -s <your resolution> -r <your framerate>

We are making clear that we only want the yuyv format, as it is the best available for my two cameras; yours may be different. Then we specify what resolution and fps we want it at. Be warned that if you set one of them wrong, it may seem like it works (still encodes) but will give an error message to look out for:

[video4linux2] The V4L2 driver changed the video from 1280x8000 to 1280x800
[video4linux2] The driver changed the time per frame from 1/30 to 1/10

The next section is our conversion parameters.

-c:v h264_omx -r <your framerate> -b:v 2M

Here, with -c:v h264_omx, we are saying to use the h264 video codec with the special omx hardware encoder. We are then telling it what the output frame rate will be, -r 10, and specifying the quality with -b:v 2M (aka bitrate), which determines how much bandwidth will be used when transmitting the video. Play around with different settings like -b:v 500k to see where you want it to be at. You will need a higher bitrate for higher resolution and framerate, and a lot less for lower resolution.
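To see why 2M is a big win, compare it against the raw YUYV bandwidth, which is width × height × 2 bytes per pixel × fps. A quick shell-arithmetic sketch (numbers from the 1280x720 at 10 fps mode above):

```shell
#!/bin/sh
# Raw YUYV 4:2:2 uses 2 bytes per pixel. At 1280x720 and 10 fps:
W=1280; H=720; FPS=10
RAW_BITS=$((W * H * 2 * FPS * 8))              # bits per second, uncompressed
echo "raw:     $((RAW_BITS / 1000000)) Mbit/s" # prints raw:     147 Mbit/s
echo "encoded: 2 Mbit/s"                       # our -b:v 2M target
```

So the encoded stream is using well under 2% of what shipping the raw frames over the network would cost.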

After that, we are telling it to disable audio with -an for the moment. If you do want audio, there is an optional section below going over how to enable that.

-f rtsp rtsp://localhost:80/live/stream

Finally we are telling it where to send the video, and to send it in the expected rtsp format (rtsp is just the transport wrapper, the video inside is still h264). Notice that with the rtsp server we can have as many cameras as we want, each with their own sub URL, so instead of live/stream at the end it could be live/camera1 and live/camera2.
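Since the resolution, framerate, and bitrate each appear in more than one place, one way to avoid mismatches is to assemble the command from variables first. A sketch (all variable names here are invented for illustration):

```shell
#!/bin/sh
# Sketch: build the streaming command from variables so resolution,
# framerate, and bitrate are each set in exactly one place.
RES="1280x720"; FPS=10; BITRATE="2M"
DEV="/dev/video0"; URL="rtsp://localhost:80/live/stream"

CMD="ffmpeg -input_format yuyv422 -f video4linux2 -s $RES -r $FPS -i $DEV \
-c:v h264_omx -r $FPS -b:v $BITRATE -an -f rtsp $URL"

echo "$CMD"   # inspect it first; on the Pi you would then run: eval "$CMD"
```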

Adding audio

Optional, not included in my final service script

As most webcams have built-in microphones, it is easy to add audio to our stream if you want. First we need to identify our audio device.

arecord -l

You should get a list of possible devices back; in this case just my webcam is showing up as expected. If you have more than one, make sure you check out the ffmpeg article on "surviving the reboot" so they don't get randomly re-ordered.

**** List of CAPTURE Hardware Devices ****
card 1: CinemaTM [Microsoft® LifeCam Cinema(TM)], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

Notice it says card 1 at the very beginning of the webcam entry, and specifically device 0; that is the ID we are going to use to reference it with ffmpeg. I'm going to show the full command first like before and break it down again.
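If you want to pull that card,device pair out with a script instead of reading it off, here is a hedged sed sketch; `card_device` is a made-up helper name, fed a sample line here rather than live arecord output.

```shell
#!/bin/sh
# Sketch: extract "hw:CARD,DEVICE" from a line of `arecord -l` output.
# `card_device` is a hypothetical helper name for illustration.
card_device() {
  sed -n 's/^card \([0-9]*\):.*device \([0-9]*\):.*/hw:\1,\2/p'
}

# Normally you would run: arecord -l | card_device
echo "card 1: CinemaTM [Microsoft LifeCam Cinema(TM)], device 0: USB Audio [USB Audio]" \
  | card_device
# prints hw:1,0
```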

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -f alsa -ac 1 -ar 44100 -i hw:1,0 -map 0:0 -map 1:0 -c:a aac -b:a 96k -c:v h264_omx -r 10 -b:v 2M -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:80/live/webcam

So to start with, we are adding a new input of type ALSA (Advanced Linux Sound Architecture), -f alsa -i hw:1,0. Because it's a webcam, which generally only has a single channel of audio (aka mono), it needs -ac 1 passed to it, as ffmpeg by default tries to interpret it as stereo (-ac 2). If you get the error cannot set channel count to 1 (Invalid argument), that means it probably actually does have stereo, so you can remove the flag or set it to -ac 2.

Finally, I am setting a custom sampling rate of 44.1kHz, -ar 44100, the same used on CDs. All that gives us the new input of -f alsa -ac 1 -ar 44100 -i hw:1,0.

Next we do a custom mapping to make sure our output streams are set up as we expect. Now, ffmpeg is normally pretty good about doing this by default when we have a single input with video and a single input with audio; this is really only to make sure that nobody out there has weird issues. -map 0:0 -map 1:0 is saying that we want the first track from the first source 0:0 and the first track from the second source 1:0.

Finally our encoding for the audio is set with -c:a aac -b:a 96k, which is saying to use the AAC audio codec with a bitrate of 96k. Now this could be a lot higher, as the theoretical bitrate of this source is now 352k (sample rate × bit depth × channels), but I can't tell the difference past 96k with my mic, which is why I stuck with that.
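That 352k figure drops straight out of the formula; a quick sanity-check sketch (note it assumes 8 bits per sample, which is what makes the author's number work out — with 16-bit samples it would be roughly double):

```shell
#!/bin/sh
# Theoretical raw audio bitrate = sample rate * bit depth * channels.
# 8-bit mono is assumed here to match the article's 352k figure.
RATE=44100; BITS=8; CHANNELS=1
echo "$((RATE * BITS * CHANNELS)) bits/s"   # prints 352800 bits/s, i.e. ~352k
```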

One gotcha with audio is that if the ffmpeg encoding can't keep up with the source, aka the output fps isn't the same as the input, the audio will probably skip weirdly, so you may need to step down to a lower framerate or resolution if it can't keep up.

Adding timestamp

Optional, but is included in my service script

This is optional, but I find it handy to directly add the current timestamp to the stream. I also like to have the timestamp in a box so I can always read it in case the background is close to the same color as the font. Here is what we are going to add into the middle of our ffmpeg command.

-vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black"

It's a lot of text, but pretty self explanatory. We specify which font file to use, drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf, and that the text will be the local time (make sure you have set your locale right!). Next we are going to start the box 8 pixels in and 8 pixels down from the top left corner. Then we set the font's color, and that it will have a box around it with a different color.

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" -an -f rtsp rtsp://localhost:80/live/stream

Making it systemd service ready

When running ffmpeg as a service, you probably don't want to pollute the logs with standard output info. I also had a random issue with it trying to read info from stdin when run as a service, so I added -nostdin for my own sake. You can add these at the start of the command.

-nostdin -hide_banner -loglevel error

You can hide even more if you want, upping it to -loglevel panic, but I personally want to see any errors that come up just in case.

So now our full command is pretty hefty.

ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" -an -f rtsp rtsp://localhost:80/live/stream

Our new full command is a lot in one line, but it gets the job done!

Viewing the stream

When you have the stream running, you can pull up VLC or another network enabled media player and point it to rtsp://raspberrypi:80/live/stream (if you changed your hostname, you will have to do it by IP instead).

When you have the command massaged exactly how you want it, we are going to create a systemd file for it, just like we did for the rtsp server. In this case we will save it to /etc/systemd/system/encode_webcam.service, keeping the -nostdin argument right after ffmpeg to be safe: sudo vi /etc/systemd/system/encode_webcam.service

# /etc/systemd/system/encode_webcam.service

[Unit]
Description=encode_webcam
After=network.target rtsp_server.service rc-local.service

[Service]
Restart=always
RestartSec=20s
User=pi
# note: systemd expands %, so %{localtime} must be escaped as %%{localtime}
ExecStart=ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 \
  -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" \
  -an -f rtsp rtsp://localhost:80/live/stream

[Install]
WantedBy=multi-user.target

Now start it up, and enable it to run on boot.

sudo systemctl start encode_webcam sudo systemctl enable encode_webcam

Connect it to your security center

I have looked at a few different security suites for my personal needs. They included iSpy (Windows) and ZoneMinder (Linux), but I finally decided upon the industry standard Blue Iris (Windows). I like it because of its feature set: mobile app, motion detection, mobile alerts, NAS and cloud storage, etc… Blue Iris also has a 15 day evaluation period to try before you buy. You don't even need to register or provide credit info!

For our needs, the best part about Blue Iris is that it supports direct to disk recording. That way we don't have to re-encode the stream! So let's get rolling; on the top left, select the first menu and hit "Add new camera".

add new camera

It will then show a popup to name and configure the camera; here make sure to select the last option, "Direct to disk recording".

Next it will need the network info for the camera; put in the same info as you did for VLC. Blue Iris should auto parse it into the fields it wants, then hit OK.

Voila! Your Raspberry Pi is now added to your security suite!

Now you can have fun setting up recording schedules, motion detection recording, mobile alerts, and more!

Source: https://codecalamity.com/raspberry-pi-hardware-accelerated-h264-webcam-security-camera/
