
How To: Reproduce Cookie Violations

This how-to describes the best way to reproduce cookie violations, as this is often a confusing topic. First, it helps to understand exactly how our crawler works in this case: 

How Nimbusec visits websites

The Nimbusec crawler is technically based on a headless Chrome browser. However, it is configured and extended in such a way that it is very hard for website fingerprinting scripts to detect whether the visitor is a bot (which it is) or a regular visitor. 

EVERY URL IS VISITED FROM A NEW, CLEAN BROWSER WINDOW IN INCOGNITO MODE. NO COOKIES ARE ENABLED, AND NO PLUGINS OR AD BLOCKERS ARE INSTALLED!
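This "clean browser per URL" behaviour can be sketched in a few lines of Python using only the standard library. This is a minimal illustration of the principle, not the crawler's actual code:

```python
import http.cookiejar
import urllib.request

# Illustrative sketch (not the crawler's real code): every visit builds a
# brand-new opener with an empty cookie jar, so no cookies, sessions or
# other state can carry over from one URL to the next.
def fresh_opener() -> urllib.request.OpenerDirector:
    jar = http.cookiejar.CookieJar()  # starts empty: no session cookies
    return urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar)
    )

# Two visits, two independent "browsers" with no shared state
opener_a = fresh_opener()
opener_b = fresh_opener()
print(opener_a is opener_b)  # False: each URL gets its own clean session
```

Reusing one opener (one long-lived browser session) would let cookies from an earlier page influence later pages, which is exactly what the crawler avoids.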

Step by step:

  1. The crawler receives a base URL to crawl. That can be "http://example.com" or "http://example.com/foo". 
  2. The base URL is visited as a unique user and its content stored. This is very important: EVERY URL VISITED IS COMPARABLE TO A NEW, CLEAN BROWSER WINDOW, without any session cookies or other state. In effect, every page is visited as if you were: 
    1. closing the browser (if one is currently open)
    2. opening a new browser window in incognito mode
    3. typing the whole URL into the address bar and visiting it
  3. The stored content is analyzed
  4. All links are parsed
    1. external links (different from the base domain) are scanned just once for malware and reputation, to make sure no harm comes from an outgoing link at the time of scanning
    2. internal links (same domain) are stored as well for further analysis, and the links are followed
  5. The stored content is analyzed in different flavors - e.g. compliance (including cookies)
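The link handling in step 4 can be sketched as follows. This is a minimal illustration, assuming a simple HTML page; the function names and example URLs are ours, not Nimbusec's actual code:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

# Illustrative sketch of step 4: split the links found on a page into
# external ones (scanned once for malware/reputation) and internal ones
# (stored and followed for further crawling).
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def classify_links(base_url, html):
    parser = LinkParser()
    parser.feed(html)
    base_host = urlparse(base_url).netloc
    internal, external = [], []
    for href in parser.links:
        absolute = urljoin(base_url, href)  # resolve relative links
        if urlparse(absolute).netloc == base_host:
            internal.append(absolute)   # same domain: follow further
        else:
            external.append(absolute)   # different domain: scan once
    return internal, external

page = '<a href="/about">About</a> <a href="https://other.example/ad">Ad</a>'
internal, external = classify_links("http://example.com", page)
print(internal)  # ['http://example.com/about']
print(external)  # ['https://other.example/ad']
```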

How to check this manually

We strongly recommend doing this from within a sandbox that can be reset to a clean state before starting a new project. This not only protects you from being compromised by a potentially malicious website, but also keeps your setup as close as possible to our systems, and to most freshly installed user systems. 

Therefore, a virtual machine running Linux is the best solution. 

You also need a freshly installed browser, such as Chromium, without any cookies.

Requirements
  • Virtual Machine with Linux installed
  • Browser (e.g. Chromium) without
    • an ad blocker
    • any configuration that blocks anything out of the box
  • (optional) a VPN tunnel to a European location, as our scan service operates from Europe
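On such a machine, a Chromium session matching the requirements above can be launched with a throwaway profile. The helper below is a hypothetical sketch that only assembles the command line; the flags are standard Chromium command-line switches, but the binary name `chromium` may differ on your distribution:

```python
import tempfile

# Hypothetical helper: assemble a Chromium invocation matching the
# requirements above (fresh throwaway profile, incognito, no extensions).
def clean_chromium_cmd(url: str, profile_dir: str) -> list[str]:
    return [
        "chromium",                        # binary name may differ per distro
        "--incognito",
        f"--user-data-dir={profile_dir}",  # throwaway profile: no old cookies
        "--disable-extensions",            # no ad blockers or other plugins
        url,
    ]

profile = tempfile.mkdtemp()  # fresh, empty profile directory
cmd = clean_chromium_cmd("http://example.com", profile)
print(" ".join(cmd))
# On a machine with Chromium installed: subprocess.run(cmd)
```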

 

  • open a browser in incognito mode
  • make sure that blocking mechanisms are deactivated
  • open web developer tools
  • select the cookie section of the inspection tools
  • browse the website to check its cookies

 

Why are there still different results?

It may occur that although all settings are correct and you browse the website very privately, the results still differ. We have seen this before when the "mf_**" cookie was not set for us, because our artificial visitor does not use a mouse, so the tracker did not set the cookie. 

Another reason why cookies might look different or are not set at all might be the location from which the website is visited. Our crawler performs the compliance scans explicitly from European countries. 

That said, if you visit from another country, you might get different results. The best advice here is as follows: 

  • use a VPN connection to simulate browsing from a different region 
  • set the language and country settings of your OS and browser to the VPN location as well
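At the HTTP level, the browser's language setting is visible to websites via the Accept-Language request header, which is one reason the locale matters. A minimal sketch of what a German-locale browser would send (the header value is illustrative):

```python
import urllib.request

# Sketch: the browser/OS language setting reaches the website as the
# Accept-Language header. The value below mimics a German-locale browser
# and is purely illustrative.
req = urllib.request.Request(
    "http://example.com",
    headers={"Accept-Language": "de-DE,de;q=0.9,en;q=0.5"},
)
# urllib stores header names capitalized, hence "Accept-language"
print(req.get_header("Accept-language"))
```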

This is of course more elaborate and requires extra configuration to reproduce the issue, but it is often worth the effort to verify the results.