Microsoft Azure Sentinel Log Analytics - Not Collecting Syslog

Today I was asked to advise on why a particular firewall was unable to send its syslog data to Azure Sentinel, and I found something rather interesting that I thought would be worth sharing.

Firstly, I validated that the deployment steps had been followed and that the VM Extension for Linux had been attached to the expected virtual machine (CentOS). Everything checked out fine.

Originally the concern was that the firewall wasn’t sending the syslog data, but a quick packet capture on the WAN interfaces dispelled those concerns. For reference, we were using UDP 514 as standard.
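
A capture like the one we ran can be sketched as follows (eth0 here is an assumed name for the WAN-facing interface; adjust it to your own, and note the ten-second timeout is just to avoid waiting forever if nothing arrives):

```shell
# Capture up to 5 syslog packets on the WAN interface to confirm the
# firewall is actually sending on UDP 514; give up after 10 seconds.
# -n: skip DNS lookups   -c 5: stop after 5 packets
sudo timeout 10 tcpdump -ni eth0 -c 5 udp port 514 \
    || echo "no packets seen - check interface name, privileges and traffic"
```

If you see your syslog packets here, the sending side is doing its job and the problem lies further along the path.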

Great, so we knew our traffic was leaving fine. Since we can’t inspect our ISP’s network, our next troubleshooting step was to make sure the traffic was being received by the Azure virtual machine at the other end.

Next, we looked at the Network Security Group (NSG) defined on the Linux Log Analytics virtual machine to determine whether the traffic was being blocked on the Azure side. I validated that the public IP address and port our traffic originated from matched the NSG rule, so it should have been passed through without issue.
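
If you prefer the command line to the portal, the NSG rules can be reviewed with the Azure CLI. A hedged sketch (the resource-group and NSG names below are placeholders, not the ones from this deployment):

```shell
# List the inbound rules on the collector VM's NSG so you can confirm
# UDP 514 from the firewall's public IP is allowed.
az network nsg rule list \
    --resource-group MyResourceGroup \
    --nsg-name CollectorNSG \
    --query "[?direction=='Inbound']" \
    --output table \
    || echo "az CLI not installed or not logged in"
```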

Now that we knew there was no external reason for the traffic to be blocked, I moved on to the virtual machine itself. I checked whether the virtual machine had any local firewall enabled, and it didn’t.
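
On a CentOS box that check looks roughly like this (a sketch, assuming either firewalld or plain iptables is managing the host firewall):

```shell
# Check whether a host firewall is active and could be dropping 514/udp.
if command -v firewall-cmd >/dev/null 2>&1; then
    sudo firewall-cmd --state || true    # prints "running" or "not running"
else
    # Fall back to inspecting raw iptables rules on the INPUT chain
    sudo iptables -L INPUT -n 2>/dev/null | head -n 20 \
        || echo "iptables not available or insufficient privileges"
fi
```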

At this point I decided to query the virtual machine to confirm whether it was listening for the traffic, and I found something unusual. To do so, I ran the following command:

sudo netstat -tunlp

The output was surprising: whilst I saw the rsyslogd process was listening, it wasn’t listening on port 514 as expected, but instead on a port high in the 30,000 range. A common reason for this is that the intended port is already bound, in which case some applications will fall back to a dynamic port so they can continue to function. However, nothing was using port 514. I validated this wasn’t a deployment error by removing the Log Analytics Agent, rebooting, and then connecting the Log Analytics Agent again; the issue persisted, and the port changed across subsequent reboots.
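
You can cross-check this with ss, the modern netstat replacement, to see whether anything at all holds the port:

```shell
# Show any listener bound to port 514 (TCP or UDP) and which process owns it;
# -t/-u: TCP/UDP  -n: numeric  -l: listening  -p: owning process
ss -tunlp | grep ':514 ' || echo "nothing listening on port 514"
```

In our case this confirmed port 514 was completely free, so rsyslogd had no reason to be dodging it.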

At this point I reviewed the rsyslog.conf file (located at /etc/rsyslog.conf) and found the problem in the following lines:

#module(load="imudp") # needs to be done just once
#input(type="imudp" port="514")

These lines had leading # symbols, meaning they were commented out and not being read, so rsyslog never loaded the UDP input module. I uncommented them and rebooted the server (you could simply restart the rsyslog service instead, but I prefer a reboot so I know the fix will survive any subsequent reboots).
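
The fix can be sketched like this; the example works on a scratch copy so it is safe to try, but in practice you would run the sed lines against /etc/rsyslog.conf with sudo and then restart rsyslog or reboot:

```shell
# Recreate the two commented-out lines in a scratch file for illustration
cat > /tmp/rsyslog-test.conf <<'EOF'
#module(load="imudp") # needs to be done just once
#input(type="imudp" port="514")
EOF

# Strip the leading # from both imudp lines to enable UDP reception on 514
sed -i -e 's/^#module(load="imudp")/module(load="imudp")/' \
       -e 's/^#input(type="imudp"/input(type="imudp"/' /tmp/rsyslog-test.conf

cat /tmp/rsyslog-test.conf
```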

When the server came back online, it was listening on port 514 and all the syslogs were being delivered into Azure Sentinel as expected!
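
If you want an end-to-end confirmation without waiting for the firewall to log something, you can fire a test message at the collector yourself (a sketch using the util-linux logger tool; 127.0.0.1 here stands in for the collector's address):

```shell
# Send a one-off syslog message over UDP 514, then look for it in
# Log Analytics or in a capture on the receiving side.
logger --server 127.0.0.1 --port 514 --udp "syslog delivery test" \
    && echo "test message sent"
```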

By micoolpaul

Technical Consultant at Nexus Open Systems. Focusing on Veeam, VMware & Microsoft Productivity and Infrastructure stacks.
