
The seemingly endless stream of Internet Explorer security flaws continues. Is there an end in sight?
Microsoft today released its September Patch Tuesday update, with Internet Explorer topping the list of vulnerabilities.
This month, Microsoft is patching 37 vulnerabilities in IE, of which 36 were reported privately and one was publicly disclosed. Ross Barrett, senior manager of security engineering at Rapid7, told eWEEK that the September IE patch addresses one publicly disclosed issue, identified as CVE-2013-7331, which is under limited active attack.
"An information disclosure vulnerability exists in Internet Explorer which allows resources loaded into memory to be queried," a Microsoft security advisory states. "This vulnerability could allow an attacker to detect anti-malware applications in use on a target and use the information to avoid detection."
The CVE-2013-7331 issue is also particularly noteworthy because it is a different type of vulnerability than the other 36 that Microsoft is patching in IE this month, which are memory corruption issues. "Remote code execution vulnerabilities exist when Internet Explorer improperly accesses objects in memory," Microsoft stated in its advisory. "These vulnerabilities could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user."
Craig Young, security researcher for Tripwire, told eWEEK that the CVE-2013-7331 vulnerability is also noteworthy in how it actually exploits a system.
"Unlike most IE information disclosures which are used to bypass ASLR [Address Space Layout Randomization] through memory address disclosure, this vulnerability utilizes a special URL scheme which allowed crafted Websites to determine if specific libraries are available," Young explained.
Young added that the presence or lack of a particular library is used to infer details about the target system's configuration, such as which security tools are installed.
"Armed with this information, the exploit kits can more carefully select which, if any, payload can be used without triggering endpoint protection," he said.
The September IE patch haul is larger than August's, when 26 vulnerabilities were patched. As is the case this month, the bulk of those were memory-related issues. Whether Microsoft can ever completely plug memory-related flaws in IE is a difficult question to answer.
"It sure doesn't seem like an end is in sight, does it? I've heard no indication that it is," Barrett said. "I think in practical terms, this has to trail off sometime, when most of the code base has been overhauled and all the use-after-free type issues have been addressed. However, I don't know when that will be."
While memory corruption issues are likely to remain a concern for some time, Microsoft is taking proactive steps to improve IE security overall. With the August Patch Tuesday update, Microsoft first introduced the capability in IE to block out-of-date ActiveX plug-ins in the browser. At the time, Microsoft said that the blocking feature would not become active for 30 days. Those 30 days are now up, and ActiveX blocking is part of the IE update.
"Applying strict controls around the use of out-of-date software is virtually a surefire way to increase the security of any system," Young said.
Young noted that one of the things that makes Google's Chrome browser robust from a security perspective is Google's attitude toward out-of-date plug-ins and browsers. Chrome has taken steps for a while to prevent users from activating out-of-date Java or Flash components.
"While these types of changes are not the enterprise-friendly policies we tend to see from Microsoft, it is a wise move in the right direction and certainly raises the bar for IE security," Young said.
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
Almost all CEOs and business owners believe their company's IT department will play a critical role in their business in 2015, a Modis survey found.
CEOs believe IT will play a critical role in their companies' success, with business leaders trusting their senior IT executives; the overwhelming majority believe their chief information officers (CIOs) hire the right IT talent for their business, according to a report from Modis.
Nearly all CEOs and business owners trust their head of IT to appropriately invest in technology that will grow their business (98 percent).
"The CEO and CIO have become extremely close business partners in recent years due to the impact technology has had on business results," Jack Cullen, president of Modis, told eWeek. "With the ever-growing concerns of network security, mobility--the opportunities and the risks--Big Data, BYOD, and web development, the CIO is now a key steward on the CEO's leadership team."
Cullen said that instead of simply being kept aware of how technology is being used, the CIO has become a true business confidant and advisor. In fact, almost all (93 percent) CEOs and business owners believe that their company's IT department will play a critical role in their business in 2015.
The survey also indicated CIOs are successfully articulating their value and making sure other departments understand the role they play: nearly all CEOs and business owners (98 percent) say their CIO adequately articulates the role IT plays at their company to other departments.
Business chiefs also appear to feel IT is mostly on par or above expectations--nearly all CEOs and business owners believe their IT department is either delivering on par (51 percent) or above expectations (36 percent) when it comes to their return on investment. Only 7 percent do not believe they are meeting expectations.
As such, it comes as no surprise that nearly half (48 percent) intend to increase their IT budget in 2015, with almost as many keeping it the same (46 percent). Only a few (5 percent) are planning to decrease IT budgets in 2015. Areas of the business receiving high dollar investments include infrastructure upgrades that focus on speed to market, security, and mobility.
Data protection and security around sensitive customer data, Big Data initiatives, and continued spending on the customer’s web site to capture increased market share or improve the company's brand, are all areas where business executives are focusing their attention. Cullen also noted conflicts can arise between other C-suite executives like chief marketing officers (CMOs) and the CIO, stemming from disputes about strategy and IT investments.
"A CMO may be hell bent on a certain product or tool that the CIO just does not wish to embrace," he said. "While these conflicts do play a part in many companies and in many departments, I have seen the decision of the CIO trump that of many of the other C levels in the organization."
New Internet domain extensions keep increasing. Registering for one is pricey, but most businesses can't afford not to.
For a long time, only 22 generic top level domains (gTLDs), such as .com, .org, and .edu, were available for businesses looking to set up shop online. But since opening up applications for new gTLDs in 2011, ICANN (The Internet Corporation for Assigned Names and Numbers) has approved applications for approximately 700 new gTLDs to expand the number of available namespaces on the Internet and provide more options to businesses of all sizes.
Applying for a new gTLD costs a company $185,000, plus a yearly fee of $25,000 if it's approved.
As the expansion rapidly continues over the next few months, it’s likely the Internet will look like a different place, and businesses shouldn’t wait until then to prepare. Consider the following steps for navigating the complex industry of top level domains.
Understand the issue at hand
It's difficult to talk about how the domain expansion might unfold without making the distinction between generic top level domains (gTLDs) and branded top level domains.
Many of today's top brands have been quick to apply for and secure their own branded top level domains, even if they're not sure what they're going to do with them. Canon for example owns .canon, Bank of America owns .bofa, and Oracle Corporation owns .oracle. However, for companies that did not get involved in the costly process of securing a branded domain name during the first round of applications, the best option is to take advantage of generic TLDs.
Extensions like .app, .club, .trade, and .vacations, which are or soon will be available to the public, are just some of the more than 400 generic terms most businesses should be researching further. Many web suffixes are already available, but the most sought-after extensions are yet to come.
In the coming months, ICANN will be making decisions on the most applied-for and hotly contested TLDs. Currently available TLDs are of great interest to many industries, but some of the most highly-anticipated extensions like .blog and .art have yet to be released due to multiple applicants vying for ownership.
ICANN will continue to adjudicate ownership as more domains join the ranks of the ones already available. Companies should identify the relevant domain names, both available and forthcoming, so they can intelligently discuss the potential each one has to impact the business.
Are new gTLDs right for the business?
The assumption by ICANN is that new gTLDs are right for a business. However, some businesses are going to be just fine using the traditional .com domain name. If the brand is unique or well-known, customers or users are going to find you either way. A brand like weather.com, however, might decide that because its name is such a generic term, new TLDs provide an additional way to strengthen its brand online. It could benefit from promoting its apps at weather.app, hosting its radar screens at weather.maps, or even creating an educational site at weather.wiki.
In fact, we know this is something of value to weather.com because it has actually registered its trademark so that it gets an early opportunity to secure the extensions it wants before public availability. Remember, if you don't get the name when it's available, there is no guarantee it will still be there when you want it. If the answer is "yes these could work," make sure you get them before it's too late.
Stay educated on domain name availability
Once you've identified the domains that are relevant and beneficial to your business, it's critical to stay educated on their availability. As mentioned earlier, many new gTLDs are available and ready for purchase. However, there are several stages of domain availability that need to be considered. Registrars will offer a sunrise period for trademark holders to register first, sometimes followed by a limited land rush period, and then finally general availability.
Signing up for newsletters and ICANN alerts for domains that have been identified will ensure a business doesn't miss out. ICANNWiki is another great source to see which domains have gone live, which ones are still to come, and to track any progress on the decisions ICANN is making that might affect domain availability.
Register like it's 1995
The next logical step is to preregister as soon as possible for all the domains that will work for your business, much like when the Internet was in its infancy and the most popular domain names were picked up quickly by those thinking ahead.
While there are significantly more extensions this time around, the same rule applies: once the domain is gone, it's gone. Getting it back might come with a hefty price tag down the line, and it's worth making the minimal investment now to avoid fighting for it later. This is especially important for brands with generic names like Hotels.com that are at risk of trademark infringement.
While extensions like .design and .menu intuitively make sense, the fact is most people are not accustomed to seeing new domain extensions on the Internet when searching for information. Yet as more and more domains are used by the brands consumers interact with, this is bound to change quickly. Companies should get on board now so that in a few months they can avoid the crush of domain-seeking companies that didn't act sooner.
A botnet that infects and exploits poorly-maintained Linux servers has been used to launch a spate of large DDoS attacks targeting DNS and other infrastructure, Akamai’s Prolexic division has warned.
Dubbed the ‘IptabLes and IptabLex’ botnet, the attack targets servers running versions of Apache Struts and Tomcat, as well as some running Elasticsearch, that have not been patched against a clutch of vulnerabilities.
Once a server is compromised, the attack elevates privileges to allow remote control, drops and runs the malicious code, and then awaits direction from the bot's command and control. The binary connects to two hardcoded addresses running on China Telecom, and anyone whose server has been infected will probably notice poor performance.
The bot had been used to launch a number of DDoS attacks during 2014, including a significant one against entertainment websites that peaked at 119Gbps.
Corralling Linux servers for DDoS is a relatively new tactic and this particular campaign appeared to be in its early stages and prone to instability, Akamai said, urging admins to patch and harden vulnerable Linux servers as soon as possible.
"We have traced one of the most significant DDoS attack campaigns of 2014 to infection by IptabLes and IptabLex malware on Linux systems," said Akamai senior vice president and general manager, Security Business, Stuart Scholly.
"This is a significant cybersecurity development because the Linux operating system has not typically been used in DDoS botnets. Linux admins need to know about this threat to take action to protect their servers."
In Akamai-Prolexic’s view, the gang behind this malware was likely to expand their targeting of vulnerable Linux servers, as well as broadening the list of targets.
Detection and remediation? Antivirus doesn't appear to be a reliable option: only two out of 52 engines picked it up as of May 2014, when the firm started monitoring the threat, and by September that had risen to only 23 out of 54. Victims will, of course, notice IptabLes or IptabLex running.
Prolexic has published a cleanup method involving a pair of bash commands and a reboot, so getting rid of this isn't hard. As for mitigation, the firm recommends rate limiting and has added a rule for the open-source YARA tool for good measure.
The most important message is still the need to patch. Don’t leave Linux to rot.
If there's one thing you can learn from today's marketing trends, it's that visual content sells. Images and videos are a powerful marketing tool you should take full advantage of, and an infographic can help you keep a balance between content and visuals. If you can master the art of the infographic, your business will soar above your competitors.
Integrating infographics into your marketing strategy is easier said than done. It often requires someone dedicated to graphic design or the use of expensive software, and infographic-specific software tends to have a steep learning curve, making it difficult to learn in a pinch. One way to get the benefits of a professional infographic without all of the work (well, most of the work) is with Microsoft PowerPoint. To make the most of PowerPoint's infographic potential, you must know how to use three key elements: Text, Picture, and Shape. You will use four tools to edit these three elements: Fill, Line, Effects, and Styles.
- Fill: The primary color of the object or text, signified by the bucket-type icon.
- Line: Determines what color the outline of an object or entity is.
- Effects: Several pre-built effects you can use to give your infographic elements shadows, outlines, and the like.
- Style: Similarly, pre-built styles you can use to make good-looking infographics with minimal effort. These can be applied to colors, lines, and effects.
Choosing Your Color Scheme
Infographics require a fairly specific color scheme in order to be most effective. You should use four colors at the most, as any more can distract the reader to the point of confusion. There are several shapes, fonts, and clip art images available through Microsoft PowerPoint itself, but don't be afraid to upload and use any photos of your own (keep it simple, though).
Additionally, you can make custom shapes and images to drive the point of your text home. You can change the fill and line of your selected shapes by double-clicking the shape, or in the toolbar at the top of the product. If you are trying to break up ideas into different sections, change up your style to signify a change in content.
Text and Font Size
What's a good infographic without some glaring statistics? When it comes time to display lots of information, pick an interesting font style and go to town on it. Use around three colors and a consistent font for maximum results. Avoid leaving too much white space; it's called an infographic for a reason. If there isn't anything to look at (or it's spread too far apart), your infographic might not resonate with the audience as well as it should. Here are more tips you can try:
- Use alternating colors to put emphasis on particular words. This makes sure people know what's important in the statistic.
- Use many different kinds of shapes to create custom graphics. The possibilities for expressing your ideas or representing the statistic that makes your point are limitless.
- Large numbers work well for statistics. If you are trying to make a point about a statistic, its size should be commensurate with its importance.
- Graphs can cause people to lose interest. Instead of graphs, try using pictures to explain the point.
By taking advantage of these variables, you'll be sure to throw together a powerful infographic that will knock the socks off of your audience.
It can be difficult to keep track of your budget and expenses, especially when prices and needs are always changing. But perhaps the biggest annoyance is the intense paper trail that you leave behind when building your budget. By taking advantage of Microsoft Excel's formulas, you can easily keep track of your budget and alter it as prices change and demand increases.
Mathematical Orders
Excel operates similarly to a calculator, and it has several mathematical functions that it can run:
Addition: +
Subtraction: -
Multiplication: *
Division: /
Exponents: ^
In order to initiate a calculation sequence, begin the formula with an equals sign (=). The cell which holds the formula will equal the result of the calculation. For example: =5+3 would create the number 8 in the selected cell.
Cell References
While you can enter Excel formulas into cells manually, you can also use cell addresses to enter a formula into a spreadsheet. Cell addresses generally use a combination of letters and numbers to determine the location of the cell on the chart. If you take a look at the columns and rows, you'll notice they are marked with letters and numbers, and these determine the address of the cell. By using a combination of cell addresses when entering formulas, you can guarantee accurate results for calculations. This is imperative when working with a strict budget. For example: =A3*A5 or =B2-A1
These cell addresses are used to represent the value of the referenced cell. You can also use a combination of cell addresses and set values, like so: =C6/2 or =D1*4 or =B6^2
Brewing Formulas
It has never been easier to build a basic budget spreadsheet in Microsoft Excel 2013.
1. Select a cell. This is the beginning point for the formula. We'll use B3 as an example.
2. Enter a formula into the formula bar at the top of the spreadsheet. Notice that the data entered into the formula bar also appears in the selected cell.
3. Type the cell address of a cell to be used in the formula. Let's say that cell B1 holds a budget surplus from December 2013, while B2 holds January 2014's budget; those numbers will be added together. Type B1 into the formula bar and the cell gains a blue border, indicating that it will be used in the formula.
4. Type the address of another cell, preceded by a mathematical operator. For example, to add B1 and B2, type +B2; the second referenced cell gains a red border.
5. Press Enter. The formula is calculated and the result placed in the selected cell. If the result is too wide to display in the selected cell, it may appear as a row of pound signs; to fix this, increase the width of the column.
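The B1 + B2 calculation above can also be sketched outside Excel; here is a minimal Python illustration of the same idea, in which the cell names and dollar values are hypothetical:

```python
# Minimal sketch of the B3 = B1 + B2 example; values are invented.
sheet = {
    "B1": 1500.00,  # December 2013 budget surplus
    "B2": 2300.00,  # January 2014 budget
}

def formula_b3(cells):
    """Mirrors the Excel formula =B1+B2."""
    return cells["B1"] + cells["B2"]

sheet["B3"] = formula_b3(sheet)
print(sheet["B3"])  # 3800.0

# Changing an input and re-running mirrors Excel's automatic recalculation.
sheet["B2"] = 2500.00
sheet["B3"] = formula_b3(sheet)
print(sheet["B3"])  # 4000.0
```

Excel performs the re-evaluation step automatically whenever a referenced cell changes; in the sketch it has to be triggered by hand.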
Easily Modify Values and Formulas
Perhaps you have decided to increase the marketing budget for the next year, and you need the spreadsheet to represent this change. It's as easy as changing the value in one cell. Excel will automatically recalculate the value of the formula after you edit one of the cells. It's important to make sure that the calculation is correct, though, since Excel will not inform you if the recalculated value is invalid or not.
Optimizing your use of spreadsheets can contribute to more productivity in the long run for both your business and your employees. You can plan for the future and get ahead on your company's asset management. You'll be surprised by how much time you can save by taking advantage of the latest Microsoft Office applications.
The theft by a Russian syndicate of 1.2 billion username and password combinations from 420,000 websites around the world means that the personal details of almost half of all users of the internet must now be considered severely compromised. It can be only a matter of time before the victims find nasty surprises in their bank statements and credit-card accounts. To be on the safe side, anyone who uses financial and shopping websites should change their passwords forthwith—preferably to something longer, more jumbled, and including no word found in any dictionary. The more nonsensical the better.
Heads may nod in agreement, but the advice is then promptly ignored. Human nature, being what it is, has a habit of making people the weakest link in any security chain. For instance, passwords that are easy to remember—the ones most people choose—tend to be the easiest for cybercrooks to guess. By contrast, passwords comprising long, random strings of uppercase and lowercase letters plus numbers and other keyboard characters are far more difficult to fathom. Unfortunately, they are also difficult to remember. As a result, users write them down on scraps of paper that get left lying around for prying eyes to see.
Basically, two factors determine a password’s strength. The first is the number of guesses an attacker must try to find the correct one. This depends on the password’s length, complexity and randomness. The second factor concerns how easy it is to check the validity of each guess. This depends on how the password is stored on a website’s server.
Taking the second factor first, any computer system that requires users to be authenticated when logging-on stores the various passwords in a database. Because such tables can be stolen, passwords are normally encrypted in the form of a “hash” of the user’s authentication details rather than in plain text. A cryptographic hash is a string of characters created from the original plain text by an algorithm (such as MD5 or SHA-1), from which it is supposed to be impossible to recreate the original. When a user enters a password, it is hashed using the same one-way algorithm, and the output is then compared with the hash stored in the database.
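To make the one-way scheme concrete, here is a minimal Python sketch of hash-then-compare authentication. SHA-256 is used purely for illustration (the article notes MD5 and SHA-1 are the common choices), and the passwords are invented:

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way hash: the plain text cannot be recovered from the digest.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# What the server stores instead of the plain-text password.
stored_hash = hash_password("correct horse battery staple")

def check_login(attempt: str) -> bool:
    # The server re-hashes the attempt and compares the two digests;
    # it never needs the original plain text.
    return hash_password(attempt) == stored_hash

print(check_login("correct horse battery staple"))  # True
print(check_login("password123"))                   # False
```

Note this sketch omits the salting and stretching discussed later in the piece, which is exactly what makes plain hash tables like this one attractive to attackers.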
The strength of a password therefore depends, to a large extent, on the hashing function used, and how well the database containing all the password hashes is protected. Such things are normally outside the user’s control, depending instead on the integrity of the online banking or retailing firm’s website security.
What is very much under the user’s control is the length, complexity and randomness of the password chosen. Several years ago, an eight-character password was considered more than adequate. Using cracking computers of the day, it would have taken a couple of years to break such a password by brute-force methods—more than enough to deter most criminals. Today, ten characters has to be considered the absolute minimum length.
For its part, the complexity of a password depends on the size of the character set it is selected from. The wider the choice, the greater the complexity—and thus the better the security. Using numbers alone limits the choice to just ten characters. Add upper- and lower-case letters and the complexity rises to 62. If all the symbols in the standard ASCII set of printable characters are available, the pool to choose from increases to 95.
By contrast, the randomness of a password depends largely on whether it was created automatically by a random-number generator (better), or by the user making a less-than-arbitrary choice (worse). Either way, randomness is measured by its so-called “entropy”—its degree of disorder. In information theory, a tossed coin is said to have an entropy of one bit (ie, one binary digit). That is because it can land randomly in one of two, equally possible, binary states.
Each time an extra bit of entropy is added to a password, it doubles the number of guesses needed to crack it. Thus, a password with 64 bits of entropy is as strong as a string of data comprising 64 randomly selected binary digits. Put another way, a 64-bit password would require 2 raised to the power of 64 attempts to crack it by brute force—in short, 18 billion-billion attempts.
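The arithmetic above is easy to check in Python, using the article's pool sizes of 10 digits, 62 alphanumerics and 95 printable ASCII characters:

```python
import math

# A 64-bit password needs 2**64 brute-force guesses --
# the "18 billion-billion attempts" cited above.
print(2 ** 64)  # 18446744073709551616

# Entropy of a fully random password: length * log2(character pool size).
def entropy_bits(length: int, pool: int) -> float:
    return length * math.log2(pool)

print(round(entropy_bits(8, 62), 1))   # 47.6 -- 8 random alphanumeric characters
print(round(entropy_bits(10, 95), 1))  # 65.7 -- 10 random printable ASCII characters
```

The two examples illustrate why the minimum length has moved from eight characters to ten: the longer password drawn from the full printable set clears the 64-bit mark, while the shorter alphanumeric one falls well below it.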
That may sound astronomical, but a 64-bit password was cracked in 2002 using brute-force methods. It did, nevertheless, take a network of volunteers nearly five years to do so. However, given the sort of equipment available today, a 64-bit password could be cracked in months.
Two things have changed in recent years to make even strong passwords vulnerable. One is that computers have got a whole lot faster. This is not just the effect of Moore’s Law—the doubling of processing power every two years or so. There has also been a quantum leap in the computational performance of PCs, thanks to the massive parallel processing made possible by the graphics processing unit (GPU) embedded in video cards. When used to crunch numbers instead of drawing complex shapes, colours, textures, highlights and shadows that change rapidly in a video game, a modern graphics card costing less than $1,000 can turn a humble PC into a desktop supercomputer.
Unlike a computer’s central processing unit or CPU, which executes single instruction threads in rapid succession, a GPU’s parallel architecture allows it to execute many threads simultaneously. Some of the latest graphics cards offer the performance equivalent of a CPU with several thousand processing cores. When used with cracking software optimised for parallel processing, attackers can make billions of guesses a second using nothing more than a high-end gaming PC. One modified machine fitted with eight graphics cards is claimed to make over 140 billion guesses a second.
The second thing that has changed is that hackers with malicious intentions no longer rely solely on brute-force methods that try all possible combinations of characters in order to guess a password correctly. These days, they can buy black-market dictionaries of common passwords, along with all their imaginable variants, that run into a billion or so entries. Such dictionaries are used to create tables of pre-generated hash values. Lists of these pre-generated hashes are stored in so-called “rainbow tables” for mounting attacks.
By trying these first, all the low-hanging fruit in a stolen hash table can quickly be unscrambled. As a rule, attackers can usually decipher at least half of the hashes in a database in 5% of the time it would take to do the lot. Weighing time against results, many attackers cease after unscrambling 80% or so of a stolen database.
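A toy Python sketch shows the idea behind this dictionary-based precomputation. Real rainbow tables use a more space-efficient chain structure, and the word list, user names and stolen hashes here are all invented:

```python
import hashlib

def md5_hex(s: str) -> str:
    # MD5 chosen deliberately: it remains the most common (and weakest)
    # hashing choice, as discussed below.
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# A (tiny, hypothetical) dictionary of common passwords.
wordlist = ["password", "123456", "letmein", "qwerty"]

# Precompute hash -> plaintext once; every stolen database can then
# be attacked with cheap lookups instead of fresh hashing work.
lookup = {md5_hex(w): w for w in wordlist}

# A stolen, unsalted hash table (hypothetical users).
stolen = {"alice": md5_hex("letmein"), "bob": md5_hex("zx9$Qw!7Lp")}

for user, h in stolen.items():
    print(user, "->", lookup.get(h, "<not in dictionary>"))
# alice -> letmein
# bob -> <not in dictionary>
```

The weak password falls instantly while the random one survives, which is exactly the "low-hanging fruit" effect described above.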
What can individuals do to protect themselves? Apart from choosing passwords that are strong enough (ie, long, complex and random mixtures of ASCII characters) to make cracking their hashes too time consuming for thieves to bother with, there is actually not all that much more. Passwords get stolen and broken mainly because of poor choices made by those responsible for a website’s security—especially the way it stores customers’ validation details.
Even when passwords are hashed, the most popular algorithm remains MD5. Yet, this has long been known to have a fundamental “collision” flaw that is easily exploited. The other widely used hashing function, SHA-1, is little better. More robust hashing algorithms exist (eg, bcrypt, scrypt, SHA-512 and PBKDF2) that make life difficult for would-be thieves. Among other things, these stretch out the hashing process by repeating it thousands of times—slowing, in the process, all decryption attempts to a snail’s pace.
Another useful defence is to “salt” each password with a different random number before hashing it. An attacker pre-generating a rainbow table then has to store the hashes of every conceivable salt value for each and every password in the dictionary used. For a salt of more than, say, 32 bits (2 raised to the power of 32 possible values), cracking such a salted hash table in any reasonable amount of time is nigh impossible with today’s technology. Even so, few commercial websites use salting, let alone stretching, to protect their customers’ logon details.
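Salting and stretching can be combined in a few lines using the PBKDF2 function in Python's standard library; the password and the iteration count here are illustrative:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative "stretching" factor

def store_password(password: str):
    # A fresh random salt per password defeats precomputed rainbow tables;
    # repeating the hash ITERATIONS times slows every cracking attempt.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("Hunter2", salt, digest))   # False
```

Purpose-built algorithms such as bcrypt and scrypt, mentioned above, go further by also making the computation memory-hard, which blunts the GPU advantage described earlier.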
Given the pace of innovation in graphics processors, coupled with the increasing power of cracking software (mostly available for free on the internet), even the best password defences are destined to be overwhelmed in due course. After two thousand years of development, the password’s days would finally seem numbered. Time to start investing in spoof-proof biometric factors that characterise each person uniquely as an individual.
http://www.economist.com/blogs/babbage/2014/08/difference-engine-1?fsrc=scn/tw/te/bl/youvebeenhacked
Over a thousand major enterprise networks and small and medium businesses in the U.S. have been compromised by a recently discovered malware package called "Backoff" and are probably unaware of it, the U.S. Department of Homeland Security (DHS) said in a cybersecurity alert on Friday.
Backoff first appeared in October 2013 and is capable of scraping the memory contents of point-of-sale systems -- industry speak for cash registers and other terminals used at store checkouts -- for swiped credit card data, monitoring the keyboard to log keystrokes, and communicating with a remote server.
"Over the past year, the Secret Service has responded to network intrusions at numerous businesses throughout the United States that have been impacted by the "Backoff" malware," the alert said. "Seven PoS system providers/vendors have confirmed that they have had multiple clients affected."
The malware is thought to be responsible for the recent data breaches at Target, SuperValu supermarkets and UPS stores, and the Secret Service is still learning of new infections.
DHS first warned of Backoff in late July, when it noted the malware was not detectable by most antivirus software. That made it particularly difficult to stop, because much of the fight against computer viruses and malware rests on antivirus applications.
Most antivirus packages now detect Backoff, but DHS is advising network operators to take immediate action to ensure they haven't been affected.
"DHS strongly recommends actively contacting your IT team, antivirus vendor, managed service provider, and/or point of sale system vendor to assess whether your assets may be vulnerable and/or compromised," it said. "The Secret Service is active in contacting impacted businesses, as they are identified, and continues to work with and support those businesses that have been impacted by this PoS malware."
In many cases, hackers gained access to machines through brute-force attacks on remote log-in systems offered by companies like Microsoft, Apple and Google, as well as other third-party vendors. Once inside, they were able to copy the malware to the machine and set it to work capturing credit card data.
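Brute-force attacks on remote log-ins are blunted by the kind of lockout policy most remote-access products support: cap the number of failed attempts a source may make within a sliding time window. A minimal sketch of that logic (thresholds and the example IP are illustrative, not taken from the DHS guidance):

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Lock out a source after too many failed logins in a sliding time window."""

    def __init__(self, max_failures=5, window_seconds=300.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # source -> timestamps of recent failures

    def record_failure(self, source, now=None):
        self.failures[source].append(time.monotonic() if now is None else now)

    def is_locked(self, source, now=None):
        now = time.monotonic() if now is None else now
        q = self.failures[source]
        while q and now - q[0] > self.window:  # discard failures outside the window
            q.popleft()
        return len(q) >= self.max_failures

throttle = LoginThrottle(max_failures=3, window_seconds=60)
for t in (0, 1, 2):
    throttle.record_failure("203.0.113.7", now=t)
print(throttle.is_locked("203.0.113.7", now=3))    # True: locked out
print(throttle.is_locked("203.0.113.7", now=120))  # False: window has expired
```

Even a modest threshold like this turns an attack that tries thousands of passwords per minute into one that manages a handful per lockout window.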
DHS asked that infections be reported to a local Secret Service field office.
The Target data breach was one of the largest in recent memory, resulting in tens of millions of credit and debit cards being compromised. In the last couple of weeks, SuperValu said that at least 180 of its stores had been hit by a data breach, and earlier this week UPS said 51 of its UPS Store locations had been hit.
State of Montana notifies 1.3 million patients of breach to Department of Public Health and Human Services server.
Hackers breached a server in the State of Montana's Department of Public Health and Human Services, prompting officials to notify 1.3 million people of the incident.
There is no evidence this information was used inappropriately -- or even accessed -- but the state is offering free credit monitoring and identity protection insurance to potentially affected individuals, said Richard Opper, DPHHS director. Montana also is alerting family members of deceased patients.
Officials discovered the breach after an independent forensic investigation determined a DPHHS server had been hacked. The department ordered the May 22 investigation from Kroll after DPHHS officials first noticed "suspicious activity" on May 15, Jon Ebelt, DPHHS public information officer, told InformationWeek.
Since the breach, DPHHS has "taken several steps to further strengthen security, including safely restoring all systems affected, adding additional security software to better protect sensitive information on existing servers, and continually reviewing its security practices to ensure all appropriate measures are being taken to protect citizen information," according to the release. For security reasons, DPHHS declined to expand on these additional measures.
The time gap between the initial breach and its detection, while outrageously long, is far from a rare occurrence. Once mission-driven attackers have established a stable beachhead, they leverage legitimate existing network resources, such as user credentials, for the next phases of the attack, rendering traditional security controls -- AV, firewalls, and sandboxes -- useless. With no system in place to monitor the internal network in real time, attackers are effectively free to explore, compromise and exploit the network at their leisure.
The health department notified both Federal Bureau of Investigation and the Montana Attorney General's Office of the breach, said Ebelt.
No information about any potential suspects was available.
Although many healthcare breaches have historically resulted from employee carelessness or error, hackers are increasingly attracted to this industry's rich stash of personal data -- including Social Security numbers, credit card information, and addresses -- and personal health information, experts said. In its 2014 Data Breach report, Verizon determined physical theft and loss, insider misuse, and miscellaneous error accounted for 73% of healthcare breaches.
Michael Raggo, security evangelist at MobileIron, told InformationWeek last month:
I will never say never, but the healthcare industry has seen a disproportionately low instance of cyberattacks, and rather a higher proportion of accidental data loss through well-intentioned but risky user behaviors on the device or lost devices. A major reason for a low instance of cyberattacks is because stringent HIPAA guidelines are a core part of the data security and compliance strategy of all healthcare organizations in the United States. That said, cyberattacks are increasing, as are the number of attack vectors organizations need to protect.
In mid-May, the Office for Civil Rights (OCR) posted 61 new breach incidents affecting more than 500 patients, bringing the 2014 tally to 992 organizations and more than 31,000 patients. More than one third were attributable to theft, and unauthorized access/disclosure accounted for about 15%.
A search of OCR's database reveals only a handful of hacking incidents in 2014. In April, DeKalb Health's website was compromised when the service provider operating the Indiana provider's website was targeted by an overseas hacking group. Hackers created a fraudulent page made to resemble the legitimate site of the DeKalb Health Foundation, a non-profit organization, and sent phishing emails seeking donations. Hackers also defaced DeKalb's website to link to the fake site.
During its investigation, DeKalb discovered that several patient databases were housed on the affected server, notified patients, and provided one year of free monitoring services.
Also in May, Centura Health fell victim to a phishing scam after hackers reportedly targeted employees at the non-profit division of Mercy Regional Medical Center. The organization notified about 1,000 patients whose information may have been compromised when hackers might have gained access to personal information including Medicare beneficiary numbers, Social Security numbers, and dates of birth. An external forensics firm confirmed this data could have been compromised.
When a database must stay tucked away in an enterprise data center, running a dependent service in the public clouds spells "latency." Or does it?
Most hybrid architectures propose moving only certain application building blocks to the cloud. A prime example is a Web front-end on Amazon Web Services and a back-end transaction server in an on-premises data center. But a new wave of hybrid thought says it's OK to separate data from applications, even to allow multiple applications to run from multiple cloud provider sites against a common data source that can be located in a company's data center.
Not so long ago (like yesterday) that suggestion might, at a minimum, earn you an eye roll and a dismissive hand wave. In more extreme cases, we're talking a recommendation for an IT straitjacket. Separating data from applications and putting half in the cloud would be nothing short of insane. The latency and security problems will kill you, right?
Not necessarily. Separating data services from application services is nothing new; we do it all the time in data centers today. However, we typically provide high-speed links between the two to avoid latency, and, since it is usually all locked in a secure data center, security is less of an issue. Some organizations actually split services between two data centers, but in the process have built the dedicated network bandwidth necessary to support the traffic and ensure adequate performance. In that situation, security is managed by keeping it all behind the firewall, limiting access, and encrypting data in motion.
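A back-of-envelope model shows why those high-speed links matter once data and application are separated: every query pays one network round trip, so a chatty workload multiplies the link latency while a batched one pays it roughly once. The link latencies and query counts below are hypothetical, chosen only to illustrate the shape of the problem:

```python
def total_time_ms(round_trips, latency_ms, per_query_ms=1.0):
    """Total wall time for a workload: server work plus one round trip per query."""
    return round_trips * (latency_ms + per_query_ms)

# The same 200-query workload over three hypothetical links
for name, latency in [("same rack", 0.2), ("metro fiber", 2.0), ("public internet", 40.0)]:
    chatty = total_time_ms(200, latency)                          # one query per row
    batched = total_time_ms(1, latency, per_query_ms=200 * 1.0)   # one batched call
    print(f"{name:15s} chatty={chatty:8.1f} ms  batched={batched:8.1f} ms")
```

On the same rack the two approaches are close; over a 40 ms public-internet link, the chatty version balloons to seconds while the batched version barely moves, which is why both dedicated links and application design decide whether this split is viable.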
I equate the first situation to having a private conversation with my wife while we're in the same room. The second is having the same conversation, but now we're yelling between rooms within our home (more likely this is the model for a conversation with my teenage daughter). In this situation, the communications may not be ideal, and the risk someone else may hear is higher, but it's still a somewhat controlled environment.
Complex or not, hybrid cloud is popular in enterprises.
When it comes to moving application services to the cloud and leaving the data in the data center, some people would claim an appropriate analogy is using a bullhorn to have a conversation with your teenager while you are home and she is at the mall.
New technologies and cost models, however, offer an alternative to connecting services over the public Internet. One contributing factor is the ability to get cost-effective, high-speed network connections from cloud provider sites to data centers, of sufficient quality to minimize latency and guarantee appropriate performance. In addition to services offered directly by hosting vendors, colocation companies are jumping into the fray to offer high-speed connections between their colo data centers, as well as from these locations directly to some cloud provider sites.
[Hybrid Security: New Tactics Required. Interested in shuttling workloads between public and private cloud? Better make sure it's worth doing, because hybrid means rethinking how you manage compliance, identity, connectivity, and more.]
For enterprise IT, there are three things to consider when deciding whether this model is for you and considering options for high-speed network connections.
1. Security: While data may have traveled only behind the firewall in the past, now it is stepping out over a network, potentially among multiple cloud locations. Keeping this information safe in transit and dealing with access-control issues will be critical, as discussed in depth in this recent InformationWeek Tech Digest. Heck, in a post-Snowden world, the business may even care. Settling compliance issues, such as encrypting data before it goes over the wire and making sure data doesn't travel outside the appropriate geographic borders, is also critical before you provision any new connections.
2. Network bandwidth and performance: Depending on the application's sensitivity to latency, the cost to guarantee a given performance level over the network may eliminate any savings gained in moving the application to the cloud. My advice: Know exactly what you're getting before committing. SLAs need to be clear, as do penalties, and make sure you have monitoring tools to validate performance. It's particularly important to look not just at the network specs but the real-world, end-to-end performance. Benchmark over a period of time before moving the application to establish a baseline. Run simulated transactions and workloads based on the benchmark; that will give you a sense of equivalent performance in the new environment. Recognize as well that if you are going into a public cloud, your mileage may vary on any given day. This is discussed in depth in the 2014 Next-Gen WAN report.
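The benchmarking advice above amounts to collecting many transaction timings and reading off percentiles rather than averages, since it's the tail latency that users feel. A minimal sketch using simulated latencies (in a real benchmark, each sample would be a timed call against the actual service; the base/jitter figures here are made up):

```python
import random
import statistics

def run_simulated_transactions(n, base_ms, jitter_ms, seed=42):
    """Generate n latency samples; in practice, replace with timed real calls."""
    rng = random.Random(seed)
    return [base_ms + rng.expovariate(1.0 / jitter_ms) for _ in range(n)]

samples = run_simulated_transactions(1000, base_ms=20.0, jitter_ms=5.0)
cuts = statistics.quantiles(samples, n=100)   # 99 percentile cut points
p50, p95 = cuts[49], cuts[94]
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  max={max(samples):.1f} ms")
```

Running the same measurement before and after a move, and comparing p95 rather than the mean, is what turns "your mileage may vary" into a number you can hold an SLA to.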
3. Business continuity/disaster recovery: The number of things that could go wrong just increased exponentially. Meanwhile, managing recovery plans becomes more complex. Map disaster scenarios prior to getting locked in, and budget for redundancy. Cable cuts and floods happen. Be prepared at the onset, rather than scrambling along with everyone else affected if a network connection is impacted. Recognize as well that you now have two different environments that may require two different plans for DR and have different recovery times. Of the three, this will likely be the most complex to work out.
While I've focused on the challenges, there are clearly benefits to be had if you need the scalability of a cloud environment for your applications. But just because we can doesn't mean we should. Much like raising a teenager, each scenario is different, and you need to be up for the challenge before you head down that road.
The September patch haul for IE is higher than August's, when 26 vulnerabilities were patched. As is the case this month, the bulk of those vulnerabilities were memory-related issues. Whether Microsoft can ever completely plug memory-related flaws in IE is a difficult question to answer.
"It sure doesn't seem like an end is in sight, does it? I've heard no indication that it is," Barrett said. "I think in practical terms, this has to trail off sometime, when most of the code base has been overhauled and all the use-after-free type issues have been addressed. However, I don't know when that will be."
While memory corruption issues are likely to remain a concern for some time, Microsoft is taking proactive steps to improve IE security overall. With the August Patch Tuesday update, Microsoft first introduced the capability in IE to block out-of-date ActiveX plug-ins in the browser. At the time, Microsoft said that the blocking feature would not become active for 30 days. Those 30 days are now up, and ActiveX blocking is part of the IE update.
"Applying strict controls around the use of out-of-date software is virtually a surefire way to increase the security of any system," Young said.
Young noted that one of the things that makes Google's Chrome browser robust from a security perspective is Google's attitude toward out-of-date plug-ins and browsers. Chrome has taken steps for a while to prevent users from activating out-of-date Java or Flash components.
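The core of any such blocking feature is a simple version-gating check: compare the installed plug-in version against a minimum-allowed version from a blocklist the browser ships. A hypothetical sketch of that logic (the plug-in names and version numbers below are illustrative, not Microsoft's or Google's actual blocklist data):

```python
def parse_version(v):
    """Turn a dotted version string into a comparable tuple, e.g. '1.7.71' -> (1, 7, 71)."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical minimum-allowed versions; a real blocklist ships with the browser
MIN_ALLOWED = {
    "java":  "1.7.71",
    "flash": "15.0.0",
}

def should_block(plugin, installed):
    """Block a plug-in whose installed version is older than the minimum allowed."""
    minimum = MIN_ALLOWED.get(plugin)
    if minimum is None:
        return False  # plug-in not on the blocklist
    return parse_version(installed) < parse_version(minimum)

print(should_block("java", "1.6.45"))   # True  -- out of date, blocked
print(should_block("java", "1.7.71"))   # False -- meets the minimum
print(should_block("flash", "16.0.0"))  # False
```

Comparing tuples rather than raw strings matters: as strings, "1.10.0" sorts before "1.9.9", which is exactly the kind of bug that would let a stale plug-in slip through.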
"While these types of changes are not the enterprise-friendly policies we tend to see from Microsoft, it is a wise move in the right direction and certainly raises the bar for IE security," Young said.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.