TRANSPARENCY REPORT 2023

Introduction

fuqpremium.com is an aggregator platform specifically designed for adult content. This means that on our platform you will find links to millions of videos on other (popular) adult websites. Through our platform, adult websites share thumbnails, titles, URLs of pornographic material and other metadata (hereafter collectively referred to as “item(s)”). It is not possible to watch a video on our platform itself; videos can only be watched on the website that is linked to.

Our purpose is to help our visitors discover adult content that is engaging to them, in a safe and responsible manner. This means that the items shared via our platform must be legally obtained and shared, the context of an item must be in line with (Dutch) law, and everyone participating must consent to the sharing of the item. Additionally, we expect everyone who uses our platform to act in line with our Acceptable Content Policy.

To keep our platform in line with our purpose, we do our utmost to monitor the items shared via our platform every day. We do this through automated content moderation, among other means, and we also receive notifications from authorities and website visitors asking us to check or remove certain items. This report gives an overview of the content moderation we applied from September 1st, 2023 to December 31st, 2023 on our platform fuqpremium.com.

This report is based on the transparency requirements of the Digital Services Act. We used the year 2023 to implement all of these requirements. As a result, it has not been possible to include all content moderation applied in 2023 in this report.


Authorities

We have received 0 order(s) from European authorities.


Notice-and-action requests

We have received 74 notice-and-action requests through our Notice-and-Action Policy. This includes requests under procedures 2 and 3 of our Notice-and-Action Policy; GDPR requests (procedure 1 of our Notice-and-Action Policy) are not included in this section.1 The median time to complete the procedure(s) was 50 hours, and no decision was made by automated means. 0 notice-and-action requests were filed by trusted flaggers.


Violation of Dutch/European law

CSAM:

Non-consensual (recorded/distributed):

Animals being (sexually) involved:

(Solicitation of) hatred towards e.g., minorities, race or religion:

Breach of copyright and/or related rights:

Digital Millennium Copyright Act requests2:


Violation of our Acceptable Content Policy

Feces, vomit, blood:

Drunk or intoxicated:

Incest:

Other:


Content moderation (including automated content moderation)

In addition to reports from authorities and website visitors, we also monitor on our own initiative, both by automated means and manually.


Manual checks

Our moderation team checks daily whether categories with a higher risk of containing illegal content include items that infringe (Dutch) law or our Acceptable Content Policy. Based on these manual checks we have removed the following content:


Violation of Dutch/European law
CSAM: 77
Non-consensual (recorded/distributed): 162
Animals being (sexually) involved: 17
(Solicitation of) hatred towards e.g., minorities, race or religion: 2

Violation of our Acceptable Content Policy
Feces, vomit, blood: 42
Drunk or intoxicated: 13
Incest: 5,833
Other: 4,221

Content Guard (automated)

All items placed onto our platform are accompanied by a title and description. Before items become visible on our platform, the Content Guard checks whether they contain terms on our banned terms list (e.g., “drunk”). If so, the placement is blocked. When a new term is added to the Content Guard, items already placed onto our platform that contain the new term are removed immediately. This is a fully automated process, which means we do not check whether the banned term matches the context of the video. It is therefore possible that items are blocked even though the context of the linked video is in line with (Dutch) law and/or our Acceptable Content Policy. As a safeguard, we continuously review and update the list, for example to keep up with changing circumstances in the industry.
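By way of illustration, the Python sketch below shows how a banned-terms check of this kind could work. The term list, function names and whole-word matching rule are assumptions made for the example; the report only states that placement is blocked when a listed term occurs in the title or description, and that newly added terms are applied retroactively.

    import re

    # Hypothetical banned-terms list; "drunk" is the only example given above.
    BANNED_TERMS = {"drunk"}

    def contains_banned_term(text: str) -> bool:
        # Lowercase the text, split it into words and test for overlap.
        words = set(re.findall(r"[a-z0-9]+", text.lower()))
        return not BANNED_TERMS.isdisjoint(words)

    def may_be_placed(title: str, description: str) -> bool:
        # Placement is blocked if the title or description contains a banned term.
        return not contains_banned_term(title + " " + description)

    def add_term_and_remove_matches(items, new_term: str):
        # A newly added term is applied retroactively: items already placed
        # onto the platform whose text now matches are removed as well.
        BANNED_TERMS.add(new_term.lower())
        return [item for item in items
                if contains_banned_term(item["title"] + " " + item["description"])]

Because such a check is purely textual, a phrase like “not drunk” would still be blocked, which is consistent with the over-blocking described above.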

The Content Guard prevented 514,475 items from being placed onto our platform or removed them from our platform. Since the Content Guard does not analyze the content itself, only the textual part of an item, no distinction can be made between different types of illegal acts, or whether an illegal act is performed at all. For the same reason it is not possible to give an indication of the accuracy or error rate of the tool.


Media Guard (automated)

The Media Guard uses hashing technology to block thumbnails that have been blocked in the past because we believed they violated the law, our Terms of Use or our Acceptable Content Policy. Before a new item can be placed onto our platform, the Media Guard creates a hash of its thumbnail and checks it against the hashes of previously blocked thumbnails. If a hash matches, the corresponding item is blocked. This fully automated process prevents illegal items, or items that are not in line with our Acceptable Content Policy, from being placed onto our platform again. When the hashing technology is updated with a new hash, thumbnails already placed onto our platform that have the same hash are removed immediately, together with the rest of the item. Security measures are in place to prevent the tool from being misused.
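As an illustration, the sketch below shows the general pattern of hash-based blocking. The report does not specify the hashing technology; SHA-256 over the raw thumbnail bytes is assumed here purely for the example (a perceptual hash would additionally tolerate re-encoded or resized copies), and all names are hypothetical.

    import hashlib

    # Hashes of thumbnails that were blocked in the past.
    blocked_hashes = set()

    def thumbnail_hash(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def may_be_placed(image_bytes: bytes) -> bool:
        # A new item is blocked if its thumbnail hash is already known.
        return thumbnail_hash(image_bytes) not in blocked_hashes

    def block_thumbnail(image_bytes: bytes, existing_items):
        # Registering a new hash also removes already-placed items whose
        # thumbnails carry the same hash.
        h = thumbnail_hash(image_bytes)
        blocked_hashes.add(h)
        return [item for item in existing_items
                if thumbnail_hash(item["thumbnail"]) == h]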

The Media Guard prevented 15,661 items from being placed onto our platform or removed them from our platform. Since the Media Guard does not record the reason a blocked item was removed, no distinction can be made between different types of illegal acts.


Instant Image Identifier of Offlimits (automated)

The Instant Image Identifier is a service of Offlimits (formerly known as EOKM). This tool continuously compares the items that are (being) placed onto our platform against the Offlimits database, which contains hashes of CSAM or material closely related to CSAM. When there is a match, the placement is blocked, or the item, if already placed onto our platform, is removed immediately. The accuracy of the service depends on the quality of the hash databases supplied to the Instant Image Identifier.
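In the same spirit, a continuous comparison against an externally supplied hash database could look like the hypothetical sketch below. The actual interface of the Instant Image Identifier is not described in this report, so the function and data shapes here are assumptions only.

    import hashlib

    def sweep(items, external_hashes):
        # Flag every item whose thumbnail hash occurs in the external
        # database; flagged items are blocked or removed immediately.
        flagged = []
        for item in items:
            h = hashlib.sha256(item["thumbnail"]).hexdigest()
            if h in external_hashes:
                flagged.append(item)
        return flagged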

The Instant Image Identifier removed 1,646 items from our platform or prevented them from being placed.


Complaints

In 2023 we received 0 complaint(s) through our internal complaint-handling system.


Out-of-court disputes

0 disputes were submitted to an out-of-court settlement body.


Suspensions

We have suspended users from our platform 0 times.


1 If a request constitutes both a violation of the law and/or our Acceptable Content Policy and a GDPR request, we process the request under either procedure 2 or 3.
2 These are copyright-infringement notifications based on US law.
3 Under US law, a counter-notice automatically means the item is placed back onto our platform unless the initial requester files a lawsuit.