
How do you handle network issues? If you’re like most small businesses, you wait until something breaks or goes wrong before getting an IT services company on the phone. At a glance, it makes sense. Why pay to fix something if it isn’t broken?
Sadly, this way of thinking can do more harm than good, and it has taken many businesses out of commission.
When you get right down to it, there are two primary ways to handle network security: reactive and proactive.
One of these costs significantly more than the other and can destroy a business. You can probably guess which one we’re talking about.
When you’re reactive with your IT services, which includes data security, it means something bad has already happened.
There are many things that can harm your data and your business: an employee accidentally downloading malware onto their computer, a data breach, or a power surge late at night after a thunderstorm.
However, being reactive basically opens the door to these threats. It’s the one mistake that can put you out of business for good.
Hackers, for example, are a HUGE threat to small businesses. These cybercriminals will stop at nothing to break into your network to steal whatever they can get their hands on or do whatever damage they can. These people don’t care if their actions put you out of business.
This is why you cannot rely on a reactive approach to your IT services. When you do, you’re a step behind hackers, malware, and even natural disasters and equipment failures.
In the past, IT services were very reactive. They were built on the break-fix model, which is exactly as it sounds. A business would wait for something to break or go wrong before calling an IT services company for help to fix it.
In the 1990s and even into the 2000s, the break-fix model had its place and it worked. But as technology improved and it became easier for even the smallest businesses to stay ahead of the curve, the break-fix model stopped making sense.
The number of external threats has increased dramatically over the last 10 years. There are countless malware programs floating around on the Internet, and hackers are working 24/7 to wreak havoc.
Today, IT services companies can predict threats. They can stop attacks in their tracks and protect your business and your data. This is called managed services — and it could save your business.
When you work with a managed services provider, you can state exactly how you want to be proactive.
Do you want your network monitored for threats 24/7? Do you want them to have remote access to your networked devices so they can provide instant support to you and your team? They can do all of that!
A good IT services company can help you make sure all your data is backed up and secure. They can make sure external threats are spotted before they become a problem. They can make sure phishing e-mails don’t expose you to harm. They can even help you run an awareness training program for your employees. The list goes on!
If you’re already working with an IT services company and they’re only providing outdated break-fix support, it’s time to say, “Enough!” Because, as many businesses have learned, waiting to make that call can be devastating!
Want more information about complete managed IT services for your business? Learn more here.
Also, take a look at our managed IT services plans: from 24/7 assistance, technology vendor management, and hardware/software setup to a complete cybersecurity framework that includes awareness training for your employees.
Technology has changed a lot over the years. Back in the mainframe days we had very standard architectures that were driven by a few vendors and managed by a few people. Developers had few choices when it came to infrastructure and programming languages. When we moved to the client server era, infrastructure configurations became much more dynamic and many enterprises adopted a three tier architecture. Developers could now choose from a variety of programming languages and specialists emerged within each layer of the architecture (web, application, database, middleware, etc.). This increase in complexity came with trade-offs. We now had greater flexibility in the types of applications we could build and the ability to deliver software with more velocity and at a lower cost. However, managing these n-tier architectures created more operational overhead and required a diverse set of skills to support the various layers.
As we enter the cloud era, complexity is at an all-time high and our trade-offs are more extreme than ever before. We can build incredible solutions at amazing scale. Both the software and infrastructure components of our architectures can be highly automated. We live in a world where everything can be delivered as a service and is available online, all the time. To support the “always on” and auto-scaling requirements that users have come to expect, the underlying architectures have become extremely complex.
At the same time our architectures are increasing in complexity, the business is demanding speed to market like never before. Large ecosystems around cloud computing, mobile computing, big data solutions, and the Internet of Things allow us to connect highly abstracted building blocks and build highly available, extremely robust solutions in a fraction of the time. The architectures of cloud, mobile, and big data are highly distributed, and the underlying infrastructure is both virtual and immutable. That is a radical change from the past, when large, inflexible, physical architectures supported the software we built.
Today’s distributed architectures are made up of many moving parts. These architectures are elastic, meaning that they scale up and down horizontally by adding and subtracting virtual resources automatically. Building architectures of this nature is much more involved than in the vertical scaling world of mainframes and client server architectures where scaling meant adding bigger physical machines and components. Many engineers within today’s enterprises have years of experience dealing with vertical architectures, but very little experience in building horizontal architectures. Enterprises are traditionally very good at managing back office applications and building n-tier software. But when it comes to architectures that require high scalability and massively parallel processing, very few enterprises have the experience required to build those types of applications.
This creates the following dilemma. The business sees an opportunity, whether it is a new revenue stream, a competitive advantage, or possibly a competitive threat, and requests that IT implements a new cloud, mobile, or big data solution. IT has very little expertise in this space yet still decides to build it themselves. They go through a long period of prototyping and learning. Many of these enterprises will fail to deliver or will deliver something subpar or very late. While IT is trying to wrap their arms around these new technologies, the business opportunity sits there idle and the opportunity costs start accumulating rapidly over time. To make matters worse, IT is spinning their wheels and consuming valuable time and money just to stand up clouds, mobile platforms, or big data databases before they can even begin to focus on building the applications and services that will provide the greatest value to the business. Much of the work that IT is trying to figure out is already a commodity that a whole host of vendors already provide out of the box as a service, or even as a completely managed service.
IT traditionally wants to be in control of everything, and developers frequently want to build many things that are not a core competency. The problem I see in IT is that it prioritizes things like control and manageability far above speed to market, customer satisfaction, and agility. We live in an era where time to market is one of the most critical value propositions in business: get something to market quickly, acquire customers, learn from those customers, and advance the product or service based on their feedback. Going dark for 12-18 months while IT ramps up its skillset for the new technologies is not a winning formula. IT needs to understand that it should focus on delivering business value instead of trying to control every technology under the sun.
The real value of IT is its deep understanding of the business and enabling business partners by providing them with cost effective technology solutions that can be delivered quickly and adjusted frequently. Building and managing commodity technologies adds time and costs to each project and ties up precious IT resources on tasks that add little to no value to the business. No wonder IT is often considered a cost center! My advice is to take a step back and build a business architecture diagram that lays out the core business services that your company offers. Then draw up a reference architecture that depicts all of the different IT services that are required to support those business services. Then, identify which of those IT services are core to your business and build those. Everything else should be outsourced to managed service providers or to various external services and products.
IT needs to stop being a control freak and figure out how to add value by quickly delivering on the company’s core services. IT should focus on being the best provider of the services its customers demand. If that service happens to be providing data center services, then by all means focus on that. If not, don’t spend time building data centers. The same goes for big data, mobile, and IoT. If the company’s core service is to be a provider of database services, mobile platforms, or sensor technology, then build those technologies. If not, find the vendors who deliver those services as a core competency and spend your time building your business services on top of them.
EHRs must do much more than simply replace paper charts. By enhancing them with personal health record features, we can create a new tool that deepens patient engagement.
Electronic health record (EHR) systems don't yet satisfy most of the people they're meant to benefit. In fact, all too often they annoy and antagonize many users, including both healthcare professionals and patients.
But recent innovations in cloud computing and big data could soon make investing in EHRs worthwhile and deliver real benefits for clinicians and patients.
Records systems: The who and the what
First let's clarify what we mean by electronic records. The US government draws a distinction among three major types: electronic medical records (EMRs), electronic health records (EHRs), and personal health records (PHRs).
EMRs are electronic versions of classic paper charts and are meant to be used by clinicians for diagnostic purposes. EHRs have a broader mission: They are intended to follow patients throughout their journey, allowing all clinicians involved in their care to access their information. PHRs contain the same information as EHRs but are managed by patients themselves, and, critically (pun intended), they can include information from sources other than clinicians, such as patients themselves, home monitoring devices, and wearables.
As recently as 2008, fewer than 10% of hospitals had even a basic electronic record system in use. As of 2013, 58.6% of hospitals had EMRs in place, but only 3.1% of hospitals meet stage 2 meaningful use criteria (Cerner HIT Trends, August 2014).
Additionally, although patients are legally entitled to access their own records whenever they want, only 10% of EHR systems currently allow them to do so without going through a clinician. While EHR systems in place may meet the Affordable Care Act's first-level criteria for meaningful use, they are often far from being the readily accessible and easily transferable systems they should be.
How can IT innovation alleviate these systemic issues?
Addressing the challenges involved in EHR systems will take time, money, and effort. However, looking ahead and investing in a federated cloud architecture -- solutions that deliver results by combining the capabilities of multiple external and internal cloud services -- for electronic health systems will give patient data greater accessibility and a greater degree of mobility while maintaining privacy and security.
I also believe that PHRs in particular need to be taken a step further. Although early devices have drawbacks, we are seeing rapid innovation in and adoption of wearables. Apple, Google, and Samsung all announced platforms this year, and Apple just released its Apple Watch to favorable reception. Patients should be able to sync data from apps and devices that track fitness and biometrics with their PHRs and securely access that data on any mobile device.
Improving technology
Independent healthcare providers such as social service workers, speech pathologists, and behavioral professionals frequently use small-scale, HIPAA-compliant cloud services to organize records. These services include some of the usual suspects, such as Google Drive and Amazon AWS.

Mobile app developers want government healthcare agencies to make HIPAA regulations more flexible and current to meet consumer, technology, and provider needs.
In a letter sent Monday to Representative Tom Marino (R-PA), ACT, the association for application developers, in conjunction with AirStrip, Aptible, AngelMD, CareSync, and Ideomed, asked the Department of Health and Human Services to "take a fresh look" at the Health Insurance Portability and Accountability Act (HIPAA) to ensure the regulation fits today's world, consumer requirements, and technological offerings.
"This is not pontification. This is about proactive changes to the guidance. That's why it is so tactical and so specific. We've all seen those letters that are broad and beautiful and ultimately unsuccessful. We need change and we need it now," said Morgan Reed, ACT's executive director, in an interview. "We are actively working with other members of Congress on both sides of the aisle to get to the expected outcome. I fully expect a bipartisan effort to move this forward to affect HIPAA."
Too often, providers and consumers are dissatisfied with the user experience they encounter with electronic health records (EHRs), he said. Thirty percent of hospital executives are dissatisfied with their EHRs, a recent Premier study found. Consumers are concerned about privacy and security, surveys show. Although 83% of 3,687 people polled this spring expect hospitals to use EHRs, only 53% trusted their information was safe, according to The Morning Consult. Those who distrust EHR security were five times more likely to withhold information from their providers, an Office of the National Coordinator for Health IT (ONC) study found earlier this year.
Rep. Marino told InformationWeek:
We are seeing a boom in innovation and technological advances in the healthcare space, but unfortunately our regulatory environment has not kept pace with this progress, and is now hindering growth and leaving job creation hanging in the balance. I would like to see the Department of Health and Human Services, as well as other governmental departments that enforce and regulate the implementation of Health Insurance Portability and Accountability Act standards, revamp the way in which they provide information and interact with the public, including large and small healthcare companies. A company should not be forced to staff up with a dozen lawyers simply to ensure they are in compliance with the law. Rather, the burden should be on a transparent and responsive government to provide clarity and guidance, so companies can focus on growing their businesses and providing better and more innovative products and services to the public.
To improve communication between providers and consumers and simplify the process for developers to enter the healthcare market, ACT and other letter signatories made the following requests:
Make existing regulation more accessible to technology companies.
A dearth of user-friendly resources makes entering healthcare a challenge for technology companies. Without assistance from expensive third-party consultants or the ability to understand "inside the Beltway" tools such as the Federal Register, startups and smaller developers in Silicon Valley and other high-tech regions operate at a disadvantage, said Reed. Like other agencies that work with software companies, the ONC should give developers the information they need to write mobile health apps on a website that features directories, appendices, technical documentation, and searchable databases, as well as updated FAQs, so app developers can learn from others' examples.
Improve and update guidance on acceptable implementations.
The remote use documentation on HHS's website pre-dates Apple's iPhone rollout. Last updated in December 2006, it does not include information on any new Apple iOS or Android phones or tablets, making it challenging for developers that want to ensure their apps meet HIPAA regulations. ACT recommends that the Office for Civil Rights (OCR) provide implementation standards, or examples of standard implementations, that would not trigger an audit. For example, the group requests clarity regarding cloud and compliance: Currently, it is unclear what is needed when encrypted data is stored in the cloud and the cloud provider has no access to the encryption key.
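The cloud-and-keys question raised above can be pictured with client-side encryption: the app encrypts each record before upload, so the provider stores only ciphertext it cannot read. A toy Python sketch of the idea (a random one-time pad stands in for a real cipher such as AES; the record contents and names are purely illustrative):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

# Plaintext health record stays on the client.
record = b"patient=Jane Doe; bp=120/80"

# Key is generated and kept client-side; the provider never sees it.
key = secrets.token_bytes(len(record))

# Only this opaque blob is uploaded to the cloud provider.
ciphertext = xor_bytes(record, key)

# The client, and only the client, can recover the plaintext.
assert xor_bytes(ciphertext, key) == record
```

With this split, the provider can store, replicate, and serve the blob without ever being able to decrypt it, which is exactly the deployment scenario the current guidance leaves unclear.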
Enhance outreach to new players in the vertical.
Rather than focus primarily on existing healthcare organizations, HHS and its agencies should expand their reach and presence to non-traditional players that want to enter this vertical. It should encourage existing mobile app developers to consider healthcare as an option, in part by participating in events far beyond Washington, ACT said.
Without changes, healthcare app developers must limit improvements to their software, Reed told us.
"We see many thousands who've foregone improvements on their products because they see a regulatory morass around HIPAA that they don't understand."
Although there are currently about 35,000 health and fitness apps on the market, the number, quality, and usefulness would increase if HIPAA were more understandable and less complex, Reed added.
Videos offer a peek at Microsoft's Windows Threshold in action, including Start menu features that could appease keyboard-and-mouse fans who hate Windows 8.
Earlier this week, alleged screenshots of the next version of Windows, codenamed Threshold, leaked online. The images provided the first in-depth look at Microsoft's upcoming OS, including support for virtual desktops, the inclusion of a notification center, and the new Start menu.
Two videos have since appeared that purportedly show the Start menu in action. Both were posted by German blog WinFuture, which also, along with another German website, posted the screenshots. Microsoft's decision to remove the Start menu from Windows 8 alienated some traditional PC users, many of whom dismissed the new user interface as too touch oriented. The new videos give these disgruntled mouse-and-keyboard users additional reassurance that Microsoft's PC interface is headed in the right direction.
If you hate Windows 8's tiled Start screen, the video suggests Microsoft has heard you, as Threshold will allow users to completely purge Live Tiles and the Modern-style apps they represent. Even so, the new apps are a central part of Microsoft's strategy; they allow developers to write software that will run on any Windows platform, from the Xbox to smartphones, and could eventually offer users a seamless digital experience as they move from device to device. For that reason, Threshold appears to include several ways for users to interact with Modern apps, even if those users spend most of their time in the desktop UI.
In April, Microsoft previewed an early version of the new Start menu, which at the time included a Windows 7-like list of common destinations and apps in a column on the left, and a collection of Live Tiles in a column on the right. The screenshots and videos are consistent with this look. They also support recent rumors that claim the new Start menu won't replace Windows 8's Start screen as much as absorb it.
In the video, the Start menu's left column includes links to File Explorer, the Documents folder, and other frequent destinations that should be familiar to Windows 7 users. But the left column evidently also can be switched to an "all apps" list. The right column, meanwhile, includes any Live Tiles the user has pinned there. In the Threshold system settings, this right column is still referred to as a Start screen, but unlike the version in Windows 8, it takes up space only within the Start menu, rather than needing its own screen.
Users can open, pin, unpin, and uninstall apps directly from within the Start menu. The upshot is that if you don't use any Modern apps, you can simply refrain from pinning any to the Start menu. You can also uninstall any that might happen to come pre-pinned when you upgrade or buy a new machine. That way, if you don't want to see any tiles, you won't.
If you choose to pin tiles to the Start menu, they can be resized and moved around, just like Windows 8.1 lets you do on its Start screen. Some of them also appear to be truly "live" in that they display a constant stream of updates, whereas others are just regular icons. This is also consistent with the way Live Tiles work in Windows 8 and 8.1.
In the videos, when Modern apps are launched from the Start menu, they open in floating windows on the desktop, just like legacy applications. These windows can be moved around, resized, and layered on top of one another, whereas current Modern apps in Windows 8 and 8.1 are viewable only in full-screen mode.
Based on the new leaks, Threshold will still contain a Windows 8-style Start screen, but it will be disabled by default. If enabled, the OS will boot directly to the Start screen. This resembles the UI customization options available in Windows 8.1, which chooses its default settings based on the type of hardware it's running on, and then lets users make changes. They can choose, for example, to boot PCs directly to the desktop or to the Modern Start screen. Even though Threshold includes a number of features aimed at desktop users, it makes sense to include both UIs, since the OS will run not only on conventional PCs but also on two-in-one devices and touchscreen PCs. But Threshold looks to give users even more control over how and when the two UIs interact.
Microsoft reps haven't commented on the leaks, but even if they're legitimate, a lot could change between now and Threshold's official release, expected in spring 2015. In fact, according to some rumors, change is at the forefront of Microsoft's goals. The company is expected in the next month to release a "Public Enterprise Technical Preview" of Threshold that will allow users to provide one-click feedback. According to ZDNet's Mary Jo Foley, who has a good record for pre-release Microsoft info, different users might be given different versions of Threshold depending on what kind of feedback they provide. For Microsoft, it seems the idea is to gather a lot of real-world data about what works and what doesn't, and implement necessary changes before shipping the final product.
Later this year, Microsoft is expected to release a second, consumer-focused preview for tablet and smartphone users. At this point, it's still not clear what Microsoft intends to call Threshold when it comes to market. Some signals, including a social media post from an official Microsoft account, suggest the next version will be called Windows 9, perhaps to distance the new release from Windows 8's poor reputation. But other reports say Microsoft might drop version numbers and just refer to all its operating systems as "Windows." Microsoft CEO Satya Nadella has made broad allusions to such a strategy, and the new videos and screenshots contain references to "Windows" but not to "Windows 9."
Nearly 70 percent of companies are concerned about employees using third-party messaging and chat apps to communicate and send documents internally, according to a survey of 397 IT enterprise decision-makers by messaging and mobility specialist Infinite Convergence.
The study also found that 59 percent of IT decision-makers think third-party messaging apps and chat tools are insecure for enterprise communication, and 41 percent of companies ban the use of one or more third-party chat apps. Additionally, while 84 percent indicate that internal enterprise messaging systems are a more secure option, less than half currently use an internal messaging service.
When it comes to bring your own device (BYOD) policies, 41 percent of survey respondents indicated that more than half of their employees use personal devices for internal messaging and to access company information. "With BYOD, enterprises have to deal with a whole new realm of IT concerns that they did not previously face: the consumer's own device and the information exchanged on it," Anurag Lal, CEO at Infinite Convergence, told eWeek. "Enterprise IT teams didn’t have to contemplate that before BYOD was an option. Not only do they have to deal with consumers' devices, but also the applications on their devices and the ability of those applications to transfer enterprise information or content."
Lal explained that with the advent of BYOD and over-the-top applications, enterprises are finding it more difficult than ever to control employee messaging.
According to the survey, two out of three executives said they are concerned about employees using their personal devices to communicate and send business-related documents and information, and more than half are concerned about former employees still having access to company information on their personal devices after they leave.
The study found that at least a quarter of companies ban some of the most popular apps and chat tools for internal communication, including Google Chat (30 percent), WhatsApp (29 percent), weChat (27 percent), Skype (26 percent) and iMessage (26 percent).
Email is considered the most secure way to communicate enterprise information, according to the IT executives surveyed, with 89 percent considering it a secure medium. The study also found that only a third of companies mandate that internal communications outside of email go through a corporate-controlled messaging system.
More than three quarters (77 percent) of enterprise IT executives indicate a highly secure, simple, and intuitive internal messaging service would be valuable compared to their current enterprise communication system. Of the respondents who said they currently have a regulated internal messaging system in place for employees, more than half said they cannot remotely wipe information sent through that system from an employee’s device.
"Enterprises need to train their end users to exercise a level of responsibility when transferring enterprise information. Employees need to be aware that their consumer apps are not secure and only use enterprise-approved or mandated apps for internal communications," Lal said. "In today’s environment, where enterprise information is breached constantly, this is the only way companies can have control over how information is exchanged within the enterprise."
As the rate of stolen mobile devices has increased, the average time for IT departments to respond to this security threat has also grown, according to a Kaspersky Lab survey of global IT security professionals.
The report found that more than one-third of employees (38 percent) take up to two days to notify their employers of stolen mobile devices, and 9 percent of employees wait three to five days. The percentage of employees who notified their employers the same day the incident occurred decreased from 60 percent to 50 percent from 2013 to 2014.
The delay traces back to the employees themselves, with only half now reporting a theft quickly. "I suspect there is some embarrassment and/or fear of reporting a lost, or perhaps stolen, device," Mark Bermingham, director of global B2B product marketing for Kaspersky Lab, told eWeek. "Employees will often spend time, which ends up being critical time, searching for and hoping to recover the device before giving up and reporting it to the organization."
Across businesses that experienced mobile device theft, 19 percent said the device theft resulted in the loss of business data, meaning businesses have approximately a one-in-five chance of losing data if a corporate mobile device is stolen.
The survey also found that the rate of mobile device theft overall has continued to climb over the years, with 25 percent of companies experiencing the theft of a mobile device in 2014, a significant increase from the 14 percent reported in 2011. However, as stolen devices become more common, employees appear to be responding more slowly, with only half of employees in 2014 reporting a stolen device on the same day the incident occurred.
The growing prevalence of stolen mobile devices may be a contributing factor to employee apathy, since a stolen smartphone might now be seen as a somewhat common occurrence, and not a rare crisis that demands attention. "I’d hope the trend would improve, but to accomplish this more training and expectation setting needs to occur between organizations and employees when dispensing and/or activating BYOD mobiles," Bermingham said. "Some of this training needs to focus on the importance of speed in reporting a misplaced device, which may actually be lost or stolen." He noted that with the right administrative tools in place, such as remote lock and locate, a misplaced device can often be retrieved.
"Additionally, in the event of loss, remote wipe becomes critical, and in this case the sooner the better," he said. "Enforcing policies like required passwords can also help to bolster security for events where devices are lost or stolen by making it more difficult for data and/or sensitive business information to be extracted from these devices."
When looking at behaviors of employees in specific regions, North American employees are the slowest to respond based on 2014 survey data, with only 43 percent of North American employees reporting a stolen device on the same day as the incident.
The Asia-Pacific region saw the biggest change year-over-year with only 47 percent of employees reporting same-day notification in 2014, a drop from 74 percent in 2013.
However, the rate of mobile device theft varied significantly across regions. The Middle East reported the lowest rate of mobile device theft by far, with 8 percent of businesses reporting an incident, followed by 15 percent in Japan and Russia.
Web servers based on both Linux and Windows are rapidly being targeted by attackers and turned into server-side botnets capable of high-bandwidth denial-of-service attacks, two security firms stated in recently published analyses.
On one hand, attackers are targeting unpatched or poorly-maintained Linux systems, exploiting known vulnerabilities and installing bot software to conscript the computers into a server-side botnet, according to an advisory released on Sept. 4 by Prolexic, a subsidiary of content-delivery provider Akamai. Yet, Windows servers are not immune.
A recent attack against a client of website security firm Sucuri used 2,000 servers to send a flood of packets to the victim's network. Web servers running on Windows 7 and 8 accounted for almost two-thirds of those systems, the company stated in an advisory. In the past, Sucuri had usually seen traffic from botnets made up of consumer desktop and laptop systems, CEO and co-founder Tony Perez told eWEEK. "This was different because of the anatomy of the network," he said. "Normally, we see attacks coming from notebooks and desktops and PCs, but now Web servers are doing the denial-of-service." By using Web servers, "the attackers have more horsepower available to them, allowing them to have a more devastating effect on unsuspecting websites," Perez said.
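A flood like this succeeds through sheer request volume, so one common first-line defense is per-source rate limiting: count recent requests per IP and reject anything over a threshold. A minimal sliding-window sketch in Python (the window and limit values are illustrative, not recommendations; real mitigations operate at the network edge and combine many more signals):

```python
import time
from collections import defaultdict, deque

WINDOW = 1.0   # seconds per counting window (illustrative)
LIMIT = 100    # max requests per source IP per window (illustrative)

# Per-IP queue of recent request timestamps.
_hits = defaultdict(deque)

def allow(ip, now=None):
    """Return True if this request from `ip` is under the rate limit."""
    now = time.monotonic() if now is None else now
    q = _hits[ip]
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # over the limit: drop or challenge the request
    q.append(now)
    return True
```

The counting idea is the same one commercial DDoS-mitigation services apply at far larger scale, alongside traffic fingerprinting and upstream filtering.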
Server-side botnets used for denial-of-service attacks first came to light in 2012, when the Izz ad-Din al-Qassam Cyber Fighters targeted financial institutions with massive bandwidth and application-layer attacks in alleged retaliation for the posting of videos to YouTube that were offensive to some Muslims.
Rather than using botnets consisting of tens of thousands of consumer desktop systems, the attackers used hundreds to thousands of Web servers. While some attackers use vulnerabilities to compromise servers, others have significant success just by trying common passwords. The 2,000 servers that attacked Sucuri's client sent some 5,000 HTTP requests per second, enough to overwhelm not just the victim's Web server but the victim's hosting provider as well.
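At that scale, each server only needs to send a few requests per second; it is the aggregate that overwhelms the target. A minimal defensive sketch (assuming standard combined-format access log lines, with thresholds chosen purely for illustration) of counting requests per source address to spot flood participants:

```python
from collections import Counter

def top_talkers(log_lines, threshold):
    """Count requests per source IP in combined-format access log lines
    and return only the addresses at or above the given request count."""
    counts = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    return {ip: n for ip, n in counts.items() if n >= threshold}

# Synthetic example: two requests from one address, one from another.
sample = [
    '203.0.113.5 - - [04/Sep/2014:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.5 - - [04/Sep/2014:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '198.51.100.7 - - [04/Sep/2014:10:00:01 +0000] "GET / HTTP/1.1" 200 512',
]
print(top_talkers(sample, threshold=2))  # → {'203.0.113.5': 2}
```

In a real incident the counting would be done per time window and fed into firewall rules or an upstream scrubbing service; this sketch only shows the grouping step.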
The hosting provider, which Perez declined to name, cut off the victim company for violating its terms of service.

The campaign to create Linux-based DDoS botnets is more extensive, according to Prolexic.
The attackers behind the denial-of-service botnet use vulnerabilities in popular Linux software, such as Apache Tomcat, Apache Struts and Elasticsearch, the company said. Once a server is compromised, the attackers upload malware, which creates a copy of itself named .IptabLes or .IptabLex, apparently mimicking iptables, the firewall and packet-filtering package included in most versions of the Linux operating system. "The analysis conducted within the lab environment showed that the binary exhibits DDoS functionality," Prolexic stated in its alert. "Two functions found inside the binary indicate SYN and DNS flood attack payloads. These DDoS attack payloads are initiated once an attacker sends the command to an infected victim machine."

The botnet created by the campaign has been used to target financial institutions, and in one case generated a DDoS attack that peaked at 119 Gbps. "This bot seems to be in an early development stage and shows several signs of instability," Prolexic noted. "More refined and stable versions could emerge in future attack campaigns." The attacks appear to come from Internet addresses in Asia, and two hard-coded addresses contained in the malware binary are in China, according to Prolexic.
Angela Gunn, an old friend who is working as a security researcher these days, was explaining some of the problems with traditional approaches to thinking about security. She is focused on Hewlett-Packard's latest push, which is to think like a bad guy trying to break into computer networks and databases.
We were both attending the HP Protect conference here, which spotlighted the "Think Like a Bad Guy" theme pretty much everywhere you looked. "Really," she said, "you have to think like whoever the bad guy is working for." My friend had a point. While it's important to understand the cyber-criminal's approach when they're attacking you, the only way to really understand them is to understand their motivations. What is it they're looking for when they break into your network?
I found out when I entered the conference display floor and wandered to the back of the room to the Bad Guys Lair. This required a walk through a smoke-filled corridor crisscrossed with laser beams to reach a bunch of people sitting around among pizza boxes, soda cans and bags of empty calories. These were the HP "Bad Guys." I later found out that I could have gone around to the rear entrance and avoided the drama. Leave it to security guys and corporate hackers to engineer in an analog back door.
What I found there, laid out clearly, was what HP means by thinking like a bad guy. On a wall-sized screen were employment ads for low-level hackers to run a local mission, provide expertise in specific areas of some operating systems, or perhaps infiltrate an office and drop off a malware-laden USB memory stick. On another screen there was page after page of ads for commercial software, except these packages were commercial malware designed to sniff out credit cards or passwords. These applications were sold and licensed just like software from big-name software companies.
Want an app to read credit card numbers from Firefox? That'll be $500. Want one that does Firefox and Internet Explorer? You can upgrade for an additional $500. One of the security researchers, whom we'll call "Sam" (they don't want their real names used in public), explained that one of his colleagues maintained between eight and 10 identities on those hacker websites so they can keep up with what's current. Then he showed me where you can buy credit card numbers. Unfortunately, this illustrated just how easy it is to obtain those numbers, and how easy it is to create counterfeit credit cards.
He took a blank plastic card with a magnetic stripe, ran it through a device and produced a counterfeit credit card in less than five seconds. So I asked him how useful such a card would be. What he told me was enough to make me immediately stop using any card without an EMV chip.
The owners of electronic health records aren't necessarily the patients. How much control should patients have? Electronic medical records contain highly personal information, from illnesses to family matters to emotional statuses. Yet those records don't necessarily belong to the patient. The question this raises in the digital age is: Just how much control should people have over their own records?
Electronic health records (EHRs) have become invaluable collections of information used by a diverse group ranging from government agencies and disease researchers to marketing firms and for-profit data brokers. Government and for-profit businesses have long collected, parsed, and used collective patient data to track the path of chronic conditions and contagious diseases, follow the success rates of new and old treatments, develop new cures, and improve the quality of providers' services. But because today's electronic records are easily shareable -- and hackable -- and have different rules depending on state and organization, some patients fear they have little to no control over the information that tracks their very personal health information.
"It's like we have a vacation home, and we've given out keys to 50 different people, and they all show up at the same time," says Chris Zannetos, CEO and founder of security developer Courion, which counts healthcare organizations as about one-third of its customers. In other words, as patients we want our data to be shared when needed, but then we're surprised at how quickly we lose control of how it's shared.
Consumers don't "own" their health records any more than they own the vast troves of data that retailers, financial institutions, and government agencies collect about them, says Dr. Josh Landy, a physician and co-founder of Figure 1, a text-messaging app for healthcare professionals. Instead of ownership, healthcare professionals and patients should discuss electronic patient data in terms of "stewardship," he says. Although the creator of the record -- such as a hospital or physician's practice -- controls the record and data, patient data has multiple stewards.
Complete records might well include a combination of handwritten medical notes scanned as PDFs into a patient's file; information manually or electronically entered from monitoring and collection tools such as stethoscopes and scales; and data entered directly into the EHR. And the picture is going to get more complex. Soon, electronic records might collect data from wearable devices -- purchased as consumer gadgets -- that gather health data around the clock.
In addition, consumers often see a variety of healthcare practitioners. Each one -- primary care doctor, orthopedic surgeon, hospital doctor, or psychiatrist -- typically uses the referring doctor's record and creates a copy appended to his own electronic health record for the individual.
With all this sharing, what if a patient has a diagnosis he doesn't agree with or doesn't want shared? Can he contest, say, a diagnosis of alcoholism?
"We have to give due course to the patient," says Richard Rosenhagen, assistant VP for EMR/HIM/CDIP at South Nassau Communities Hospital. "If you're not transparent, you're going to end up in a bad place." The hospital has a process for discussing such conflicts with patients and making their disagreement part of the record, though the diagnosis remains. "If they disagree with what's in there, they have a right to voice their opinion," he says. "That disagreement doesn't give them the right to amend the record."
Incorporating more patient-driven data changes will present a whole new set of challenges for health IT professionals.
One reason is that, as a rule, consumers are "horrible historians," says John Hoffstatter, a physician's assistant and delivery director of advisory services at CTG Health Solutions. People forget to bring in a list of current medications or don't know why they take a particular pill. Having patients read through their electronic record is essential to improve care and reduce costs, he says.
Today, IT services companies can predict threats. They can stop attacks in their tracks and protect your business and your data. This is called managed services — and it could save your business.
When you work with a managed services provider, you decide exactly how proactive you want to be.
Do you want your network monitored for threats 24/7? Do you want them to have remote access to your networked devices so they can provide instant support to you and your team? They can do all of that!
A good IT services company can help you make sure all your data is backed up and secure. They can make sure external threats are spotted before they become a problem. They can make sure phishing e-mails don't expose you to harm. They can even help you run an awareness training program for your employees. The list goes on!
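The core of proactive monitoring is simply checking that things are healthy before anyone notices an outage. As a purely illustrative sketch (real managed-services tooling does far more, and the host and port below are placeholders), a provider's check might periodically verify that your key services still answer on their ports:

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical watch list; hostnames and ports are placeholders:
# for host, port in [("example.com", 443)]:
#     status = "up" if check_port(host, port) else "DOWN - alert the provider"
#     print(f"{host}:{port} is {status}")
```

A managed services provider runs this kind of check continuously and, crucially, acts on the alerts, which is what turns a simple script into genuine 24/7 protection.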
If you’re already working with an IT services company and they’re only providing outdated break-fix support, it’s time to say, “Enough!” Because, as many businesses have learned, waiting to make that call can be devastating!
Want more information about complete managed IT services for your business? Learn more here.
Also, take a look at our managed IT services plans: from 24/7 assistance, technology vendor management and hardware/software setup, to a complete cybersecurity framework that includes awareness training for your employees.