Search for duplicate files

Note: This topic has been unedited for 2072 days. It is considered archived - the discussion is over. Do not add to it unless it really needs a response.

Hello, I'm just wondering if there's a way to list all of the duplicate files on a wiki. I know there's Special:FileDuplicateSearch, but that only looks at one file at a time. Is there a special page (or an extension, or anything) that will churn out a list of all of the duplicates? Thanks, Cook Me Plox

The API: /api.php?action=query&generator=allimages&prop=duplicatefiles --Pcj (TC) 02:50, July 1, 2010 (UTC)
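For readers unsure how to interpret that query's output: the sketch below pulls the duplicate groups out of the JSON the API returns. The sample `response` object is hypothetical, but follows the shape of MediaWiki's `format=json` output for this query.

```javascript
// Hypothetical sample of what /api.php?action=query&generator=allimages
// &prop=duplicatefiles&format=json returns: pages keyed by page ID, with a
// "duplicatefiles" array present only on files that actually have duplicates.
const response = {
  query: {
    pages: {
      "101": { title: "File:A.png", duplicatefiles: [{ name: "A_copy.png" }] },
      "102": { title: "File:B.png" }  // no duplicates, so no duplicatefiles key
    }
  }
};

// Collect one group per duplicated file: the file itself plus its duplicates.
const groups = [];
for (const id in response.query.pages) {
  const page = response.query.pages[id];
  if (page.duplicatefiles) {
    groups.push([page.title].concat(
      page.duplicatefiles.map(function (d) { return "File:" + d.name; })
    ));
  }
}
console.log(groups);
```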
Sorry, but I'm not really sure what to do with that. I got this, but I don't know how to use it. Cook Me Plox 06:56, July 1, 2010 (UTC)

You take the URL [1] and see the gaifrom="19. Poneytail spikey.png" at the top? You keep adding that to the URL to get to the next page. Remember that you have to URL-encode certain characters (Google it for a table to do it by hand), and spaces turn into underscores, so that first one would become [2], and so on. If there are any dupes, you will see them. --Uberfuzzy@Wikia 07:27, July 1, 2010 (UTC)
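The URL manipulation Uberfuzzy describes can be sketched as follows; `continueUrl` is a hypothetical helper name, and `encodeURIComponent` does the percent-encoding so no by-hand table is needed:

```javascript
// Build the next-page URL by hand: spaces become underscores, then the value
// is percent-encoded. continueUrl is a hypothetical helper, not part of the API.
function continueUrl(base, gaifrom) {
  return base + "&gaifrom=" + encodeURIComponent(gaifrom.replace(/ /g, "_"));
}

const next = continueUrl(
  "/api.php?action=query&generator=allimages&prop=duplicatefiles",
  "19. Poneytail spikey.png"  // the gaifrom value from the example above
);
```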

Thanks for that. But I'm still not seeing where the dupes are. Is it in the pageid? Sorry I'm not seeing this. Cook Me Plox 20:13, July 1, 2010 (UTC)
Probably a better URL with an example is where at least for me it shows up in the first result (notably there are some oddities in the file names of the first few that might merit other further attention). --Pcj (TC) 20:26, July 1, 2010 (UTC)
Okay, I see the first duplicate file among all the other things. But how do I get to the next page? Adding, for instance, gaifrom="(Swamp) Snake hide.png", it doesn't start with that one. I'm rather confused. Sorry I'm not grasping this :/ Cook Me Plox 20:49, July 1, 2010 (UTC)
The continuation URL for the previous one is this. You really should probably contact Wikia about the first few entries on there, as there is some oddity going on with images with double colons between them and their namespace as well as other similar weirdness. Also see this URL to show more duplicate images for each image. --Pcj (TC) 21:06, July 1, 2010 (UTC)
EDIT: The duplicate first few files (especially those without extensions) appear to be "uploaded videos" which are just links to other sites. I would say this is still a bug and should still be reported but it doesn't seem to be exclusive to your site - it occurs on WoWWiki too. --Pcj (TC) 21:11, July 1, 2010 (UTC)
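On finding the next page programmatically: when a result set is incomplete, the API's response includes a query-continue element carrying the gaifrom value for the next request. A sketch, using a hypothetical sample response:

```javascript
// Hypothetical sample: when more results remain, the response includes a
// "query-continue" element with the gaifrom value for the next request.
const data = {
  "query": { "pages": {} },
  "query-continue": { "allimages": { "gaifrom": "(Swamp) Snake hide.png" } }
};

let next = null;
if (data["query-continue"]) {
  next = data["query-continue"].allimages.gaifrom;  // feed this into the next URL
}
```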

Is there any way to automate this process to only output duplicated files? Duskey(talk) 19:03, August 25, 2010 (UTC)

Not really; you could use a regular expression to eliminate the non-duplicated entries. --Pcj (TC) 19:06, August 25, 2010 (UTC)
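One way to apply Pcj's filtering suggestion to the query's text output; the sample string is hypothetical but mimics the API's XML format, with one page that has duplicates and one that does not:

```javascript
// Hypothetical fragment of the API's XML output. Filtering keeps only the
// pages that contain a <duplicatefiles> element, i.e. the actual duplicates.
const xml =
  '<page title="File:A.png"><duplicatefiles><df name="A_copy.png" /></duplicatefiles></page>' +
  '<page title="File:B.png"></page>';

const dupes = xml
  .split("</page>")
  .filter(function (p) { return p.indexOf("<duplicatefiles>") !== -1; });
```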
It might be simplest to use a google search for the duplicate file notice in your files. --◄mendel► 20:37, August 25, 2010 (UTC)

I have written some JavaScript to list these for you (by AJAX). First, put this code in your Special:Mypage/monaco.js (or Special:Mypage/global.js):

dil = new Array();
function findDupImages(gf) {
  output = "";
  url = "/api.php?action=query&generator=allimages&prop=duplicatefiles&gailimit=500&format=json";
  if (gf) url += "&gaifrom=" + gf;
  $.getJSON(url, function (data) {
    if (data.query) {
      pages = data.query.pages;
      for (pageID in pages) {
        dils = "," + dil.join();
        if (dils.indexOf("," + pages[pageID].title) == -1 && pages[pageID].title.indexOf("File::") == -1 && pages[pageID].duplicatefiles) {
          output += "<h3><a href='/" + pages[pageID].title + "'>" + pages[pageID].title + "</a></h3>\n<ul>\n";
          for (x = 0; x < pages[pageID].duplicatefiles.length; x++) {
            output += "<li><a href='/File:" + pages[pageID].duplicatefiles[x].name + "'>File:" + pages[pageID].duplicatefiles[x].name + "</a></li>\n";
            dil.push("File:" + pages[pageID].duplicatefiles[x].name.replace(/_/g, " "));
          }
          output += "</ul>\n\n";
        }
      }
      $("#mw-dupimages").append(output); // assumed: write this batch into the placeholder div; this line appears to be missing from the archived copy
      if (data["query-continue"]) setTimeout("findDupImages('" + data["query-continue"].allimages.gaifrom + "');", 5000);
    }
  });
}
$(function () { if ($("#mw-dupimages").length) findDupImages(); });

Then create a page with this content:

<div id="mw-dupimages"></div>

Then you can browse to that page and it will create a list of duplicate images for you (every 5 seconds it will add more until it exhausts the list). Please let me know if you have any questions. --Pcj (TC) 21:45, August 26, 2010 (UTC)

Just tested it out and it seems to work, thanks pcj. Duskey(talk) 13:42, August 27, 2010 (UTC)
What would the requirements be to have this function on a non-Wikia wiki? I'm an admin on the Official Team Fortress Wiki and we definitely need something like this so we can get all the dupes in one place. I followed the instructions but it did not work (which I'm assuming is due to our current setup). Any help/suggestions? surlyanduncouth (talk) 14:13, August 29, 2010 (UTC)
You'll need to install jQuery on your wiki and change some of the URLs. --Pcj (TC) 17:33, August 29, 2010 (UTC)
Ah, that's probably not possible. Thanks anyway! surlyanduncouth (talk) 14:57, August 30, 2010 (UTC)
It is possible if you can edit the wiki's JS. --Pcj (TC) 15:12, August 30, 2010 (UTC)
