
New blog section added

Announcements
  • I’m excited to announce that a new blog section has been added 😛 The blog actually uses Ghost rather than NodeBB, and sits on its own subdomain at https://content.sudonix.com (if you ever fancy hitting it directly).

    We’ve moved all the blog articles out of the existing category here and migrated them to the Ghost platform. However, you can still comment on these articles just as if they were part of the root system. Whilst logged in, pick the blog section, then choose the article you want to read. Once opened, you’ll see a short synopsis of the article.

    Click the link to read the rest of the post, then scroll down to the bottom and you’ll see a space where you can provide your comments! Take the time to read the articles and provide your own feedback - I’d love to hear it.


    The blog component is not quite finished yet - it needs some polish, and there are a few bugs scattered here and there, but these will only manifest themselves when a certain sequence of events occurs.


  • 7 Votes
    3 Posts
    162 Views

    @DownPW Yes, this is the general idea. I’m also in the process of putting together a welcome message to be displayed when a new user registers so they can explore all of the services on offer.

  • 24 Votes
    49 Posts
    2k Views

    @crazycells You should be able to upvote more now, as I’ve located the hard-coded function in the plugin responsible for this and changed it from

    maxVotesPerUser(reputation) {
        let MIN = 5, MAX = 50;
        let calculatedVotesPerUser = Math.floor(reputation / 10);
        if (calculatedVotesPerUser < MIN) {
            calculatedVotesPerUser = MIN;
        } else if (calculatedVotesPerUser > MAX) {
            calculatedVotesPerUser = MAX;
        }
        return calculatedVotesPerUser;
    },

    to

    maxVotesPerUser(reputation) {
        let MIN = 5, MAX = 500;
        let calculatedVotesPerUser = Math.floor(reputation / 50);
        if (calculatedVotesPerUser < MIN) {
            calculatedVotesPerUser = MIN;
        } else if (calculatedVotesPerUser > MAX) {
            calculatedVotesPerUser = MAX;
        }
        return calculatedVotesPerUser;
    },

    It appears that the upvote limit is 10% of your reputation by default.
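    To see what the change actually does, here’s a minimal standalone sketch (not the plugin code itself - the helper names below are purely for illustration) comparing the default and adjusted limits for a few sample reputation values:

        // Default formula: 10% of reputation, clamped between 5 and 50
        function defaultMaxVotes(reputation) {
            return Math.min(Math.max(Math.floor(reputation / 10), 5), 50);
        }

        // Adjusted formula from the change above: reputation / 50, clamped between 5 and 500
        function adjustedMaxVotes(reputation) {
            return Math.min(Math.max(Math.floor(reputation / 50), 5), 500);
        }

        // Compare the two for a few sample reputation values
        [100, 1000, 5000].forEach((rep) => {
            console.log(rep, defaultMaxVotes(rep), adjustedMaxVotes(rep));
        });
        // 100  -> 10 (default) vs 5   (adjusted)
        // 1000 -> 50 (default) vs 20  (adjusted)
        // 5000 -> 50 (default) vs 100 (adjusted)

    The net effect is that the hard cap rises from 50 to 500 votes, with the limit now scaling at reputation / 50 instead of reputation / 10.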

  • NodeBB as blogging platform

    General
    5 Votes
    10 Posts
    521 Views

    @qwinter I’ve extensive experience with Ghost, so let me know if you need any help.

  • 2 Votes
    4 Posts
    401 Views

    @Hari I get that load speed with NodeBB alone 🤭

  • 5 Votes
    4 Posts
    323 Views

    Of course 😉

  • 0 Votes
    1 Post
    219 Views

    Every once in a while, you encounter a repetitive issue that, no matter what you try to resolve it, manifests itself over and over again - sometimes even on a daily basis. Much of how the issue is remediated really depends on the person assigned to the task.

    You might be puzzled at why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over-thinker syndrome”. What do I mean by this? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you could end up with a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect - and the technical reasoning is usually to blame.

    Sometime around 2007, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster - none more so than I - so obviously, I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help?”

    The investigation

    Joking aside, we’ve all heard the “have you rebooted?” question touted at some point during helpdesk discussions, but this one was different. A system rebooting itself is usually symptomatic of an underlying issue somewhere, and my team member was ready for the task ahead. Stepping up to the plate, he asked if it was ok to install some monitoring software on the server. Usually, installing additional software components on a production server without testing first is a non-starter, but seeing as we needed to get this resolved as quickly as possible to reinstate the nightly backup (which incidentally hadn’t run successfully for three days by this point), I provided approval to proceed without question. There’s a leap of faith at this point, as you could cause more problems than you actually set out to resolve in the first place, but, as with anything related to information technology, sometimes you have to accept an element of risk.

    The software itself was actually for the RAID controller and motherboard. The assigned technician had already decided it was related to something along the lines of a faulty RAM module, or perhaps an issue with the controller itself. My thoughts leaned elsewhere at this point - if the server reboots itself at exactly the same time every day, then there is an established pattern which should be investigated first. It’s a logical approach, but it’s a common trait for technical support staff to sometimes think outside of the box - or in this case, outside of the building. Not wanting to push my opinion, or trample on anyone’s toes, I decided to remain quiet and see just how far this would go before intervention was required.

    In this case, not very far. The following morning, after another unannounced nightly reboot, the error “the previous shutdown at [insert time and date here] was unexpected” showed up in the event log. No real surprises there, and once again, exactly the same time as the previous night. I asked my technician for an update, and he informed me that he believed the memory was faulty and somehow causing the server to blue screen and reboot. That was actually a reasonable response, so I commended him on his research and findings, but also reminded him to perform a manual backup so that we at least had something to revert to in the event of a failure. Later that afternoon, the same tech approached me and said that he had ordered some replacement memory and wanted to arrange downtime to fit it. Trying to keep a poker face and remain passive, I agreed, and the memory was replaced the same evening around 10pm.

    At 2am the following morning, kaboom! The server rebooted itself again. Not wanting to admit defeat, our courageous tech suggested that the problem could be due to the system overheating. Another fair point, but not realistic, as you’d see this in the event log as a thermal shutdown. I willingly entertained this, and allowed investigations into the CPU temperature to begin - after another manual backup. Unsurprisingly, the temperature data returned no smoking gun, so that was abandoned. The next port of call was to reapply the service pack. Now, I’ll admit that this used to fix a multitude of issues under Windows NT Server (particularly Service Pack 4), but not under Windows 2003. I declined this for obvious reasons - if you reapply the service pack, you run the risk of overwriting key DLL files that could (and often will) render Exchange inoperable. Not being prepared to introduce an unnecessary risk into what was already becoming something of a showcase, I suggested that we look elsewhere.

    The exasperation

    The final (and honestly more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, this would show itself randomly throughout the day and wouldn’t wait until 2am. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned about what could be seen at this point as wasting someone’s time when they thought they had the answer to what now seemed to be an impossible problem. This is where experience will eclipse any formal qualifications hands down. Those with university degrees may scoff at this, but those with highly analytical thinking patterns seem to avoid logic like the plague and go off on a wild tangent, looking for a dramatically technical explanation and solution to a problem when it’s much simpler than you’d expect. Hence the title of this article - avoid the “bulldozer to find a china cup” scenario.

    After witnessing another pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at this problem from a logical angle rather than a technical one. The confused look I received was priceless - the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant by this. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old-looking UPS with a red light illuminated at the front, like it had been a prop in a Terminator film.

    The realisation

    Suddenly, the real cause of this issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self-test at 2am each morning, and because the test failed owing to the faulty battery, the connected server lost power and started back up only after the offending equipment left bypass mode and went back online.

    Where is this going, you might ask? Here’s the moral of this particular story (and many others like it):

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense just to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement - don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience - it’s earned, gained, and leaves an indelible mark

    Let’s hear your views. Did you ever come across a situation where, no matter what you tried, nothing worked? Did the solution turn out to be much simpler than you’d ever have thought?

  • 5 Votes
    4 Posts
    476 Views

    @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that PeerLyst was setting an example by allowing the community to “self moderate”. Such a statement is completely ridiculous, and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation”, which simply doesn’t work.

    The evidence here speaks for itself.

  • 0 Votes
    1 Post
    186 Views

    When you look at your servers or surrounding networks, what exactly do you see? A work of art, perhaps? Sadly, this is anything but the picture painted for most networks once you begin to look under the hood. What I’m alluding to here is that beauty is only skin deep - in the sense that neat cabling resembling art from the Sistine Chapel, tidy racks, and huge comms rooms full of flashing lights look appealing from the eye-candy perspective and will probably impress clients, but in several cases, this is the ultimate wolf in sheep’s clothing. Sounds harsh? Of course it does, but with good intentions and reasoning. There’s not a single person responsible for servers and networks on this planet who will willingly admit that, whilst his or her network looks like a masterpiece to the untrained eye, it’s a complete disaster in terms of security underneath.

    In reality, it’s quite the opposite. Organisations often purchase bleeding-edge infrastructure as a means of leveraging the clear technical advantages, enhanced security, and competitive edge it provides. However, under the impressive state-of-the-art ambience and air conditioning often lies an unwanted beast. This mostly invisible beast lives on low-hanging fruit, will be tempted to help itself at any given opportunity, and is always hungry. For those becoming slightly bewildered at this point, there really isn’t an invisible beast lurking around your network that eats fruit. But, with a poorly secured infrastructure, there might as well be. The beast being referenced here is an uninvited intruder in your network. A bad actor, threat actor, bad guy, criminal… call it what you want (just don’t use the word hacker) can find their way inside your network by leveraging the one thing that I’ve seen time and time again in literally every organisation I have ever worked for throughout my career - the default username and password. I really can’t stress enough the importance of changing this on new (and existing) network equipment, and it doesn’t stop there either.

    Changing the default username and password is about 10% of the puzzle when it comes to security and basic protection principles. Even the most complex credentials can be bypassed completely by a vulnerability (or in some cases, a backdoor) in ageing firmware on switches, firewalls, routers, storage arrays, and a wealth of other devices - including printers, which incidentally make an ideal watering hole thanks to the defaults of FTP, HTTP, SNMP, and Telnet, most (if not all) of which are usually left on. As cheaper printers do not have screens like their more expensive copier counterparts (the estate is reduced to make the device smaller and cheaper), any potential criminal can hide here and not be detected - often for months at a time - and arguably, they could live in a copier without you being aware either.

    A classic example of an unknown exploit into a system was the Juniper firewall backdoor that permitted full admin access using a specific password - regardless of the one set by the owner. Whilst the Juniper exploit didn’t exactly involve a default username and password as such (this particular exploit was hard-coded into the firmware, meaning that any “user” with the right coded password and remote SSH access would achieve full control over the device), it did leverage the fact that poorly configured devices could have SSH accessible to 0.0.0.0/0 (essentially, the entire planet) rather than a trusted set of IP addresses - typically from an approved management network.

    We all need to get out of the mindset of taking something out of a box, plugging it into our network, and then doing nothing else - such as changing the default username and password (ideally disabling it completely and replacing it with a unique ID) or turning off access protocols that we do not want or need. The real issue here is that today’s technology standards make it simple for literally anyone to purchase something and set it up within a few minutes, without considering that a simple port scan of a subnet can reveal a wealth of information to an attacker - several of these tools are equipped with a default username and password dictionary that is leveraged against the device in question if it responds to a request. Changing the default configuration instead of leaving it to chance can dramatically reduce the attack landscape in your network. Failure to do so changes “plug and play” to “ripe for picking”, and it’s those devices that perform seemingly “minor” functions in your network that are the easiest to exploit - and to leverage in order to gain access to neighbouring ancillary services. Whilst not an immediate gateway into another device, the compromised system can easily give an attacker a good overview of what else is on the same subnet, for example.

    So how did we arrive at the low-hanging fruit paradigm?

    It’s simple enough if you consider the way that fruit can weigh down a branch to the point where it is low enough to be picked easily. A poorly secured network contains many vulnerabilities that can be leveraged and exploited very easily, without the need for much effort on the part of an attacker. It’s almost like a horse grazing in a field next to an orchard where the apples hang over the fence - easily picked, often overlooked, and gone in seconds. When this term is used in information security, a common parallel is the path of least resistance. For example, a pickpocket can acquire your wallet without you even being aware, and this requires a high degree of skill in order to evade detection yet still achieve the primary objective. On the other hand, someone strolling down the street with an expensive camera hanging over their shoulder is a classic example of the low-hanging fruit scenario, in the sense that this theft is based on an opportunity that wouldn’t require much effort - yet comes with a high yield. Here’s an example of how that very scenario could well play out.

    Now, as much as we’d all like to handle cyber crime in this way, we can’t. It’s illegal 🙂

    What isn’t illegal is prevention. 80% of security is based on best practice. Admittedly, there is a fair argument as to what exactly is classed as “best” these days, although it’s a relatively well-known fact that patching the Windows operating system, for example, is one of the best ways to stamp out a vulnerability - but only for the system it is designed to protect. Windows is just the tip of the iceberg when it comes to vulnerabilities - it’s not just operating systems that suffer, but applications too. You could take a Windows-based IIS server, harden it in terms of permitted protocols and services, and install all of the available patches - yet still have an outdated version of WordPress running (see here for some tips on how to reduce that threat), or often even worse, outdated plugins that allow remote code execution. The low-hanging fruit problem becomes even more obvious when you consider breaches such as the well-publicised Mossack Fonseca (“Panama Papers”) leak. What became clear after an investigation is that the attackers in this case leveraged vulnerabilities in the firm’s public-facing WordPress and Joomla installations - this in fact led to them being able to exploit an equally vulnerable mail server by brute-forcing it.

    So what should you do? The answer is simple. It’s harvest time.

    If there is no low-hanging fruit to pick, life becomes that much more difficult for any attacker looking for a quick “win”. Unless determined, your average attacker is unlikely to spend a significant amount of time on a target (unless it’s Fort Knox - then you’d have to question the sophistication) only to walk away empty-handed with nothing to show for the effort. To this end, below are my top recommendations. They are not new, they are non-exhaustive, and they are certainly not rocket science - yet they are surprisingly missing from the “security 101” perspective in several organisations.

    • Change the default username and password on ALL infrastructure. It doesn’t matter if it’s not publicly accessible - this is irrelevant when you consider the level of threats that have their origins from the inside. If you do have to keep the default username (in other words, it can’t be disabled), set the lowest possible access permissions and configure a strong password.
    • Close all windows - in this case, lock down protocols and ports that are not essential - and if you really do need them open, ensure that they are restricted.
    • Deploy MFA (or at least 2FA) to all public-facing systems and those that contain sensitive or personally identifiable information.
    • Deploy adequate monitoring and logging techniques, using a sane level of retention. Without any means of forensic examination, any bad actor can be in and out of your network well before you even realise a breach may have taken place. Without decent logging, you have no way of confirming, or even worse, quantifying your suspicion - which can spell disaster in regulated industries. Don’t shoot yourself in the foot.
    • Ensure all Windows servers and PCs are up to date with the latest patches. The same applies to Linux and Mac systems - despite the hype, they are vulnerable to an extent (but not in the same way as Windows), although attacks are notoriously more difficult to deploy and would need to be in the form of a rootkit to work properly.
    • Do not let routers, firewalls, switches, etc. “slip” in terms of firmware updates. Keep yourself and your team regularly informed and updated on the latest vulnerabilities, assess their impact, and most importantly, plan an update review. Not upgrading firmware on critical infrastructure can have a dramatic effect on your overall security.
    • Lock down USB ports and CD/DVD drives, and do not permit access to file sharing, social media, or web-based email. This has been an industry standard for years, but you’d be surprised at just how many organisations still have these open and yet do not consider this a risk.
    • Reduce the attack vector by segmenting your network using VLANs. For example, the sales VLAN does not need to (and shouldn’t) connect directly to accounting. In this case, a ransomware or malware outbreak in sales would not traverse to other VLANs, thereby restricting the spread. A flat network is simple to manage, but it’s a level playing field for an attacker to compromise if all the assets are in the same space.
    • Don’t use an account with admin rights to perform your daily duties. There are no prizes for guessing the level of potential damage this can cause if your account is compromised, or you end up with malware on your PC.
    • Educate and phish your users on a continual basis. They are the gateway directly into your network, and bypassing them is surprisingly easy. You only have to look at the success of phishing campaigns to realise that they are (and always will be) the weakest link in your network.
    • Devise a consistent security and risk review framework. Conducting periodic security reviews is always a good move, and you’d be surprised at just what is lurking around on your network without your knowledge. There needn’t be a huge budget for this. There are a number of open source projects and platforms that make this process simple in terms of identification, but you’ll still need to complete the “grunt” work in terms of remediation. I am currently authoring a framework that will be open source, and will share this with the community once it is completed.
    • Conduct regular governance and due diligence on vendors - particularly those that handle information considered sensitive (think GDPR). If their network is breached, any information they hold around your network and associated users is also at risk.
    • Identify weak or potential risk areas within your network. Engage with business leaders and management to raise awareness around best practice and information security.
    • Perform breach simulation, and engage senior management in this exercise. As they are the fundamental core of the business function, they also need to understand the risk, and more importantly, the decisions and communication that are inevitable post breach.

    There is no silver bullet when it comes to protecting your network, information, and reputation. However, the list above will form the basis of a solid framework.

    Let’s not be complacent - let’s be compliant.