Thanks to Douglas for coming out to say a few words about the recent Facebook Autobot Outbreaks that have affected huge numbers of Facebook apps and developers.
The details can be seen in the group Facebook HACK: Berlin. Extracting from it:
The developer guidelines are clear that our highest principle is a great user experience. Our automated systems look for apps that are publishing stories that are hidden, blocked or marked as spam. If too many users do this, we consider the app “spammy” and take action.
We can also get more information on why this happens from the latest Facebook developer blog post, Reducing spam on News Feed and Profile:
Over the past year, we’ve worked hard to improve our automated systems that catch spam and malicious behavior on Platform. These systems allowed us to cut spam on Platform by 95 percent in 2010, greatly increasing user satisfaction and trust with apps on Facebook. We’ve been getting a lot of user feedback recently, spiking significantly over the past week, about the amount of app spam on their feeds and walls. To ensure that users have a positive experience and developers can build in a healthy ecosystem, we run automated screens that detect apps receiving large amounts of negative user feedback. Recently, we have updated our enforcement systems to identify and reduce unwanted user-to-user posts created by apps. An example of these unwanted posts is when a user posts content to a friend’s Wall using an app and the friend removes the post, marks the post as spam or blocks the app.
We encourage you to proactively review user posts and make sure that they provide engaging and welcome content. In addition, we will be rolling out new App Insights in a few weeks that will let you better monitor negative user feedback (e.g., posts marked as spam and stream story hides). For examples and explanations of user feedback click here.
It is really surprising that, before we even knew about this and got the new App Insights, our well-received apps were already banned!
A heated discussion is going on in the Facebook Developer Forum too. One of the “you shouldn’t miss it” threads is WARNING! HONEST application with 8 million users were just banned!.
An extract of my feedback (post #53 in the above thread) is given below:
- If we are to review how well our apps perform, and from that get insights into whether they will get banned, I would expect Insights to provide us REAL-TIME data/stats. My reason is simple: if I make a code change or feature enhancement that I think is great, but it turns out to have a “negative impact”, there is no point in getting the figures updated only after 2 days, because within those 2 days my app is likely to have been banned already.
** I believe there are many, many cases where, before Insights data is even available for a new app, that app has been banned already. So it seems to me an unfair treatment that we cannot even see what is going on. Alternatively, the Insights data should be made available even after the app is banned. In fact, could we have a “forced sandbox mode”? A banned app could be put into this mode so that only its admins/developers/testers can use it. They could then review the Insights, get the app fixed/revised, and submit it to Facebook for review.
- I would expect certain “guidelines” to be published. OK, from Douglas’s feedback to my comments, I understand that the feedback is taken in a relative manner. But that does not help.
For example, suppose I include a wall-post feature, and it turns out that 99% of the users love it and 1% always mark the wall post as spam (even though I clearly informed them that my app would make the wall post, and the content is manually written by the user himself). So, does the 99% good outweigh the 1% bad!?? OK, there is no need to answer me with a “yes”… but how about the case of 90% good vs 10% bad?? As we all know, we cannot make features that make ALL people happy. I believe Facebook knows this better than we do!! For every change the platform makes, there will always be BAD comments!! For example, the wiki was shut down… I think that is good, but there are people saying it is bad!
- So, we need certain guidelines so that we can review and judge whether we should implement a feature or not. Otherwise, seeing 90% good feedback vs 10% bad feedback makes no sense to me. I may think a feature is good and most people love it, but in turn Facebook may think 10% bad feedback is unacceptable and ban my app. In fact, it seems to me that the big apps were banned because of this (mine is one of the victims!)
Besides, what would happen if the first 10 users were actually my competitor’s testers? They could give me bad ratings and negative feedback so as to prevent me from entering the market for a certain kind of app.
- My other suggestion is:
In my experience, most of the spamming apps are in fact making use of the communication channels in a programmatic way, e.g. creating events, publishing wall posts, tagging friends, inviting people. The best way to block them is to remove the programmatic interfaces and provide us with a unified way of using the communication channels. E.g. all communication would have to go through a standard dialog (such as FB.ui in the JS SDK), so that users know what they are doing.
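To illustrate the suggestion above: FB.ui with the "feed" method is the JS SDK's standard dialog for posting, and because the dialog shows the post to the user before anything is published, the user must explicitly confirm it. The wrapper below is a minimal sketch; the function name promptWallPost and its parameters are hypothetical, not part of any SDK.

```javascript
// Sketch: routing a user-to-user post through the Feed dialog
// (FB.ui with method "feed") instead of publishing programmatically.
// `fb` is expected to be the FB object from the Facebook JS SDK;
// passing it in as a parameter keeps this wrapper testable.
function promptWallPost(fb, friendId, story) {
  fb.ui({
    method: "feed",          // standard Feed dialog
    to: friendId,            // the friend's Wall to post to
    name: story.name,        // headline of the post
    link: story.link,
    description: story.description
  }, function (response) {
    // The callback only receives a post_id if the user pressed
    // "Share"; cancelling the dialog publishes nothing.
    if (response && response.post_id) {
      console.log("User confirmed the post: " + response.post_id);
    } else {
      console.log("User cancelled; nothing was published.");
    }
  });
}
```

Because the dialog is rendered and confirmed by the user, an app cannot silently spam a friend's Wall this way, which is the whole point of removing the programmatic interfaces.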
What do you think?