I have been using the "Bash on Ubuntu on Windows 10" feature since it was released with the Windows 10 Anniversary Update. This is a fantastic addition that allows the user to run Linux programs natively in Windows, using the "Windows Subsystem for Linux" aka WSL. Beyond just enabling Linux binaries to run, Microsoft partnered with Canonical to bring the full Ubuntu package ecosystem, meaning new Linux programs can be installed using Ubuntu's apt-get package manager. However, unlike the rest of Windows, it does not currently keep these packages up to date with security patches by default.
I'm using some customizations and an alternative shell (more on that in a future post), and today when I logged in I noticed a number of new security updates available. While Ubuntu's unattended-upgrades package is installed on WSL, it does not run automatically. This post documents one method for installing these updates: creating a Windows scheduled task.
First I updated the sudoers configuration to allow apt-get to run as root without prompting for a password.
Add the following lines:
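My exact sudoers lines aren't reproduced here, but a typical entry granting passwordless apt-get looks like this (the file path and username below are placeholders, not my actual configuration):

```
# /etc/sudoers.d/apt-upgrade -- hypothetical drop-in file; "youruser" is a placeholder.
# Allows apt-get (and only apt-get) to run as root without a password prompt.
youruser ALL=(root) NOPASSWD: /usr/bin/apt-get
```

Restricting the NOPASSWD grant to a single command keeps the exposure smaller than a blanket passwordless sudo.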
Next, I tested this by running the following command from the Windows Run box (Windows+R):
%windir%\system32\bash.exe -c "sudo apt-get update && sudo apt-get -y upgrade"
Once that was confirmed to work, I added a scheduled task. You can do this either through the Task Scheduler GUI, or the command line. I used the following command line and XML configuration so that I can reproduce it reliably in the future as needed.
From a Windows command shell:
schtasks /create /xml Task-LinuxUpgrade.xml /tn UpdateLinux
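My actual XML isn't shown above, but a minimal task definition along these lines would run the upgrade daily (the start time and schedule here are illustrative, not the values I used):

```xml
<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Triggers>
    <CalendarTrigger>
      <StartBoundary>2017-01-01T03:00:00</StartBoundary>
      <ScheduleByDay><DaysInterval>1</DaysInterval></ScheduleByDay>
    </CalendarTrigger>
  </Triggers>
  <Actions Context="Author">
    <Exec>
      <Command>%windir%\system32\bash.exe</Command>
      <Arguments>-c "sudo apt-get update &amp;&amp; sudo apt-get -y upgrade"</Arguments>
    </Exec>
  </Actions>
</Task>
```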
Finally, I tested this scheduled task by running it manually:
schtasks /run /i /tn "UpdateLinux"
Bash on Ubuntu on Windows is an excellent new feature of Windows 10, and hopefully this will help others who want to keep it up to date with security patches.
STEM makerspace in west St. Louis County. If you live in the area or are just interested in STEAM (Science, Technology, Engineering, Arts and Math), check out their website and consider joining. I find a lot of overlap between open source hardware and software, electronics, innovation and the information security space, and I'm excited to be part of their membership.
Recently Didier Stevens wrote an interesting article and related SANS posting regarding Windows Backup Privileges. In it he presented a modified cmd.exe from ReactOS to allow asserting the backup privilege which bypasses traverse checking and file DACLs. I had looked into similar techniques as well, and his article prompted me to put together a video adding to what Didier has posted.
In addition to using a local process to access files using backup privilege, Windows supports asserting a "backup intent" over network SMB/CIFS connections. This allows someone on a remote system, using an account with backup or restore privileges, to read or write files and traverse file systems regardless of DACLs. Fortunately, a couple of Linux tools (smbclient and mount.cifs) already implement this capability, which can be extremely useful during penetration tests. I put together a video that demonstrates this capability below.
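For example, mount.cifs exposes the backupuid/backupgid mount options to assert backup intent on behalf of a local user; a sketch (server address, share, and account name are placeholders) looks like this:

```shell
# Mount the remote administrative share with backup intent asserted
# for local uid 0 (root); host, share, and username are illustrative.
mount -t cifs //192.168.1.50/C$ /mnt/target \
    -o username=backupop,backupuid=0
```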
Also, in follow-up conversations on Twitter, some questions were raised about using this functionality with native Windows commands. I have found it is possible using native PowerShell (using this script) and RoboCopy with the /B option. RoboCopy allows both reading (backup) and writing (restore) files, but does not work when the parent directory is inaccessible (and therefore requires copying an entire folder when its parent is inaccessible). The following screenshot illustrates this.
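As a sketch, copying an otherwise-inaccessible profile directory with backup semantics looks like this (the source and destination paths are illustrative):

```
:: Copy an entire folder in backup mode (/B), recursing into
:: subdirectories including empty ones (/E). Paths are placeholders.
robocopy C:\Users\victim\Documents D:\evidence\Documents /B /E
```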
For future consideration, it should be possible to read and write files using native PowerShell; the library SeBackupPrivilege by giuliano108 does exactly that, but uses compiled .DLLs. With a little more time and PowerShell work it should be possible to accomplish the same thing in native PowerShell. Thanks to Didier for the interesting article, and I hope you find this additional information useful.
I've consolidated several posts about presentations here, the presentations are attached.
September 2005 St. Louis Infragard chapter on Positive Thinking in Information Security - Positive Thinking
January 2006 meeting of the St. Louis Information Systems Security Association (ISSA) - Security Architecture
August 2006 St. Louis Security Group, a SIG of St. Louis Unix Users Group - Inside Out Hacking
At BlackHat, security researchers Billy Rios and Nathan McFeters presented "The Internet is Broken" which contained information on GIFARs, a term meaning GIF image files combined with Java ARchives (JAR). These files could be uploaded to sites that allow image uploading (such as many site's member photos), to run code in the context of that site - getting around the "same origin policy" that browsers impose. This works because GIF images (along with many other file types) store their header in the beginning of the file, and ZIP archives (which is what JAR files are made of) store their data at the tail.
The following video demonstrates this technique.
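The layout trick is easy to sketch in a few lines of Python: concatenate a valid GIF with a valid ZIP, and both parsers stay happy. (The class file contents below are placeholder bytes, not a working applet.)

```python
import io
import zipfile

# Minimal 1x1 GIF: signature, logical screen, palette, image, trailer.
gif = (
    b"GIF89a"                                    # signature
    b"\x01\x00\x01\x00\x80\x00\x00"              # 1x1 screen, global palette
    b"\x00\x00\x00\xff\xff\xff"                  # two-color palette
    b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor
    b"\x02\x02\x44\x01\x00"                      # LZW-coded pixel data
    b"\x3b"                                      # trailer
)

# Build a JAR-style ZIP in memory (the class bytes are a stand-in).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("Evil.class", b"\xca\xfe\xba\xbe")

gifar = gif + buf.getvalue()

# Image parsers see a GIF signature at the front...
is_gif = gifar.startswith(b"GIF89a")
# ...while ZIP readers locate the central directory at the tail.
names = zipfile.ZipFile(io.BytesIO(gifar)).namelist()
print(is_gif, names)
```

ZIP readers tolerate the prepended image data because the end-of-central-directory record is found by scanning backward from the end of the file.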
(Originally published 2008-08-12)
Security researcher Moxie Marlinspike presented a new vector of attack against HTTPS using Man in the Middle (MitM) attacks against SSL web sites, and an updated version of his sslstrip tool (not yet posted), at BlackHat DC. You can watch the video of his presentation, and there is also an interview with Moxie and BlackHat organizer Jeff Moss. Although some journalists have gotten the point of his presentation right - such as this article by George Ou - many have not, and there is a lot of confusion about what Moxie actually presented. Below I present my analysis of his presentation and what it means to us as security practitioners.
The primary point of his presentation is that HTTPS as implemented relies on HTTP, which of course is an insecure protocol. When a user types paypal.com into their browser, the browser does a GET request for http://paypal.com, and the server returns a 302 redirect to https://www.paypal.com/us. The initial use of HTTP allows a man in the middle to strip all references to HTTPS from the response to the user - while still acting as a valid HTTPS client to the server. This reminds me of one of Gene Spafford's famous quotes:
Secure web servers [cryptographically enabled web servers] are the equivalent of heavy armored cars. The problem is, they are being used to transfer rolls of coins and checks written in crayon by people on park benches to merchants doing business in cardboard boxes from beneath highway bridges. Further, the roads are subject to random detours, anyone with a screwdriver can control the traffic lights, and there are no police.
In other words, Moxie presented a technique that exploits the implicit transitive trust between HTTPS and the related protocols that HTTPS is dependent upon. Not only does this include the clear-text HTTP and DNS protocols, but also numerous others such as TCP, IP, ARP, and layer 1-2 protocols such as IEEE 802.3 and 802.11.
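The insecure bootstrap is easy to observe yourself: the first request goes out over plain HTTP, and only the server's redirect upgrades the session (the hostname here is illustrative, and the exact status code and target URL vary):

```shell
# Watch the HTTP -> HTTPS handoff that an sslstrip-style attacker interposes on.
curl -sI http://paypal.com | grep -i -E '^(HTTP|Location)'
```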
Marcus Ranum is one of the outspoken voices in the industry on the effects of transitive trust (and trust issues in general). He has an interesting post "AfterBites - 160 Illustrations of Transitive Trust" on the Tenable blog that summarizes his views on the subject. I also highly recommend listening to the Rear Guard podcast that he references in that article. In it he does a great job of breaking down the idea of trust in Information Security.
Unfortunately there is very little a site can do to prevent such a MitM attack. For example, even if a site operator went so far as to offer the site over HTTPS only, with no HTTP fallback (never mind the impact that would have on site traffic), it would still be vulnerable: the attacker could listen for the initial HTTP request from the browser and send a 302 redirect even though the site itself never issues one.
The same goes for browsers - defaulting to HTTPS doesn't help unless they also remove the fallback to HTTP. As long as the browser is willing to fall back, an attacker can simply DoS the HTTPS connection. Obviously removing HTTP fallback isn't a viable option.
So what can be done? Browser vendors should bring back user interface indicators (positive indicators) that a site is secure. Of the major browser vendors, only Google Chrome does a decent job at this - the others have actually removed some of the indicators. Further, Google offers an SSL-only browsing mode in the latest Chromium releases. In this mode Chromium only loads HTTPS sites, and only if there are no certificate errors. It is my hope that when Google releases this feature in the mainstream Chrome browser, they add a second shortcut by default on installation, "Chrome (Secure Browsing Mode)", to encourage users to take advantage of this feature.
To take that idea one step further, browser vendors should offer an HTTPS-only site opt-in list or mechanism, perhaps similar to Safe Browsing extensions, that enforces HTTPS for sites that desire that behavior. It wouldn't do any good to deliver this in real time through DNS, as a MitM attacker could remove or modify DNS responses as well. Although there have been some calls that DNSSEC would help, until there is universal acceptance and coverage, it won't help at all.
Finally, it is unclear but likely that client-side certificate validation would help. By validating client-side certificates, web servers can be sure that they are communicating with the end browser, and not a MitM. However, client-side certificates come with a whole host of issues that prevent general acceptance, including difficulty for end users and MitM attacks against the enrollment process itself. Perhaps the usability issues could be addressed to make this a more viable option.
(Originally published 2009-02-23)
Back in April I presented at the St. Louis ISSA on wireless security issues. This included a wireless security overview, enterprise security issues, hotspot (guest) network security issues, and an introduction to Open Secure Wireless - a project I've been working on as a possible solution to unencrypted hotspot wireless. More on Open Secure Wireless in my next post.
The slides are attached.
(Originally published 2011-03-01)
Recently I ran across a scenario where the Microsoft Sysinternals tool PsExec would not work against a Windows 7 domain-joined computer. The command was failing with an "Access Denied" error. On Vista and newer, User Account Control (UAC) issues a restricted token to processes, but PsExec requires an elevated token. On the local system's Microsoft-Windows-UAC\Operational log the following event appeared: The process failed to handle ERROR_ELEVATION_REQUIRED during the creation of a child process.
Further research found that newer versions of PsExec have a command argument (-h) to specify elevated rights.
However, even with -h specified, PsExec was still failing with "Access Denied". After some digging, I discovered that it's all about how the authentication credentials are presented to the remote system. UAC has an exception for remote connections using domain credentials, so that machines can still be administered remotely (otherwise, there would be no way to respond to UAC prompts). When connecting remotely and authenticating with NTLM using a domain account, Windows 7 issues an elevated token.
With PsExec when you specify the username on the command line it causes an explicit (local) authentication to occur on the remote system, and Windows issues a limited rights token, causing PsExec to fail. However, if you authenticate as the target user on the local computer (using RunAs or logging in directly), and then use PsExec with implicit (NTLM) authentication to the remote computer, the process gets the elevated token on the remote system and it works.
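In practice that means authenticating first, then letting PsExec pass the credentials implicitly (the domain, account, and host names below are illustrative):

```
:: Fails: explicit credentials cause a restricted token on the remote system.
psexec \\ws7box -u CORP\admin -p <password> -h cmd.exe

:: Works: start a local shell as the domain account, then let PsExec
:: authenticate implicitly over NTLM to receive an elevated token.
runas /user:CORP\admin cmd.exe
psexec \\ws7box -h cmd.exe
```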
This behavior becomes more obvious when using telnet. The built-in Windows telnet client automatically authenticates using NTLM (top window in the screen shot below), and the user is given an elevated token. However, logging in with the same user from a third-party telnet client results in a restricted token (bottom window).
I hope this information is helpful to someone else wondering why their PsExec might be failing on Windows 7 due to UAC.
(Originally published 2010-12-11)
I am currently working on a major revision to my Open Secure Wireless project to incorporate changes introduced with IEEE 802.11u.
The changes to 802.11 are part of what the Wi-Fi Alliance is calling "Hotspot 2.0", which they plan to launch in 2012. It appears that the Wi-Fi Alliance and Wireless Broadband Alliance may be currently focusing this effort on mobile carriers and service providers rather than smaller open and public hotspots. However, the changes introduced in 802.11u could also be used to enable a hotspot to be both open and secure. I am referring to this project as Open Secure Wireless 2.0 (OSW2), and I encourage the Wi-Fi Alliance to consider adopting this as a component of Hotspot 2.0.
A paper (and hopefully code!) will be forthcoming soon, but read on for the background and overview of how this might work.
In May 2010 I posted a paper on "Open Secure Wireless" - a method by which wireless hotspots can be both open and secure simultaneously. A few months later Tom Cross and Takehiro Takahashi with IBM X-Force posted an article introducing a similar method they called "Secure Open Wireless Access". Although we developed these approaches independently, both proposals utilize EAP-TLS without client authentication and share other similar features. Since discovering that we were working on similar projects, we have been working together to try to gain acceptance and deployment of secure and open hotspots.
Although both methods are RFC compliant and have been shown to work in lab testing, they still require modifications to the behavior of wireless authentication servers and supplicants. One of the biggest sticking points for most people is validation of the server certificate to prevent man-in-the-middle attacks. Although this is an existing challenge with EAP-TLS and PEAP networks, the use of TLS in public hotspot networks exacerbates the concern.
In my paper I suggested under “Future Work” that the SSID could be compared to the CN or SAN of the x.509 certificate, similar to how web browsers compare host name to CN and/or SAN. Unfortunately the SSID is limited to 32 octets, which may not be long enough for all domain names. IBM X-Force in their paper proposed an additional information element they called the XSSID (eXtended SSID) to support full length and international domain names.
Since then, IEEE released an update to the 802.11 standard, IEEE 802.11u-2011, in February 2011. The amendment became publicly available six months later and is published at the IEEE Get program page for 802.11.
802.11u defines several key changes to 802.11 that can help enable the possibility of open and secure wireless hotspots. These include an Interworking information element and several Access Network Query Protocol (ANQP) elements that, put together and combined with EAP-TLS without client authentication, may achieve this purpose.
The Interworking element can be used to advertise the availability of an 802.11u capable network, the access network type (for example, free public network), and that Internet access is available.
802.11u-capable clients query 802.11u-capable APs for additional information using ANQP. This can include a request for an element called the Network Access Identifier (NAI) Realm list. The NAI Realm list includes one or more NAI Realms (defined according to RFC 4282) and optional EAP methods and authentication parameters associated with each realm.
Using this method, an Open Secure Wireless 2.0 hotspot would respond to the NAI Realm list request with the domain name associated with their public certificate, an EAP type of EAP-TLS, and a credential type of 8 - None (server-side authentication only).
Once the supplicant determines to associate with the network (either through pre-configured profiles or user interaction) and authentication begins, the following process would occur:
The user interface during this process is key. Clearly there will be UI changes for 802.11u clients for listing and connecting to hotspots. It is important that the domain name be prominent in the UI, as it is the part that is verified against the certificate. As part of this research I will also be looking into using validated organization information from Extended Validation (EV) certificates.
This process would allow a wireless client to establish a secure connection to an open public hotspot without authentication and without a prior trust relationship. Open Secure Wireless 2.0 leverages existing standards (EAP-TLS and 802.11 amended by 802.11u), and could, if adopted as part of the Wi-Fi Alliance Hotspot 2.0 program, be widely deployed by vendors. With Wi-Fi Alliance and vendor support, it could potentially improve the security of millions of public hotspot users worldwide.
(originally published 2011-11-01)
Most wireless hotspots use open, unencrypted wireless networks. Guests using these networks risk information disclosure and system compromise. Operators risk registration portal bypass and in the case of pay registration systems, potential sensitive data loss. Copyright concerns, including new legislation in the United Kingdom and a court case in Germany, may increase the pressure on providers to provide secure registration services.
I am proposing a solution that would have the encryption benefits provided by WPA/WPA2-Enterprise without the requirement for client authentication. This is possible using a novel (but RFC compliant) application of the existing EAP-TLS standard. The effect is similar to a web browser connecting to an HTTPS web site - the server certificate is validated, but a client certificate is only needed if the server is configured to require client authentication.
I currently have this working in a lab environment. Using a modified open source RADIUS server, I have been able to establish a secure wireless connection from both open source and commercial wireless supplicants. Without modification these clients require a client certificate to be configured, although this certificate will never be requested. With minor code modifications, I was able to connect with wpa_supplicant without a client certificate configured. Closed source supplicants would need to be modified by their respective vendors to make this a reality.
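For illustration, a wpa_supplicant network block for such a modified supplicant might look like this (the SSID and CA bundle path are placeholders; a stock build would additionally demand client_cert and private_key entries):

```
network={
    ssid="OpenSecureWireless"
    key_mgmt=WPA-EAP
    eap=TLS
    ca_cert="/etc/ssl/certs/ca-certificates.crt"
    # No client_cert/private_key: server-side authentication only.
}
```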
Please read my research on Open Secure Wireless attached, and let me know what you think.
(Originally published 2010-05-19)