Integrating COTS Search Engines into Eclipse: Google Desktop Case Study

Author(s):  
Denys Poshyvanyk ◽  
Maksym Petrenko ◽  
Andrian Marcus
Kybernetes ◽  
2019 ◽  
Vol 48 (6) ◽  
pp. 1355-1372 ◽  
Author(s):  
Ying Huang ◽  
Nu-nu Wang ◽  
Hongyu Zhang ◽  
Jianqiang Wang

Purpose The purpose of this paper is to propose a model for product recommendation that improves recommendation accuracy over the current search engines used in e-commerce platforms such as Tmall.com. Design/methodology/approach First, the proposed model comprehensively considers price, trust and online reviews, all of which represent critical factors in consumers’ purchasing decisions. Second, the model introduces quantization methods for these criteria by incorporating fuzzy theory. Third, the model uses a distance measure between two single-valued neutrosophic sets, based on the prioritized average operator, to consolidate the influence of positive, neutral and negative comments. Finally, the model uses multi-criteria decision-making methods to integrate the influence of price, trust and online reviews on purchasing decisions and generate recommendations. Findings To demonstrate the feasibility and efficiency of the proposed model, a case study is conducted on Tmall.com. The results of the case study indicate that the recommendations of the proposed model outperform those of the current search engines on Tmall.com, and that the model can significantly improve the accuracy of search-engine-based product recommendations. Originality/value The product recommendation method addresses a critical shortcoming of the search engines on e-commerce platforms. In addition, the proposed method could be used in practice to develop a new application for e-commerce platforms.
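The abstract does not reproduce the paper's prioritized-average-based distance measure; as a rough illustration of the kind of quantity involved, the following sketch computes a common normalized Hamming distance between single-valued neutrosophic sets (truth/indeterminacy/falsity triples), which is one standard SVNS distance from the literature, not necessarily the one used in the paper:

```python
def svns_distance(a, b):
    """Normalized Hamming distance between two single-valued
    neutrosophic sets, each a list of (truth, indeterminacy,
    falsity) triples over the same universe of elements.
    Illustrative only; the paper uses a prioritized-average-based
    measure whose exact form is not given in the abstract."""
    assert len(a) == len(b)
    total = sum(abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
                for (ta, ia, fa), (tb, ib, fb) in zip(a, b))
    return total / (3 * len(a))

# Hypothetical review profile: (positive, neutral, negative) ratios,
# compared against an ideal alternative with only positive comments.
ideal = [(1.0, 0.0, 0.0)]
product = [(0.7, 0.2, 0.1)]
print(svns_distance(product, ideal))  # smaller distance = closer to ideal
```

In a recommendation setting, alternatives can then be ranked by their distance to the ideal alternative, with the smallest distance ranked first.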


Author(s):  
Natasha Tusikov

This chapter explains how the transnational regime uses search engines (especially Google) and domain name registrars (specifically GoDaddy) to throttle access to infringing sites. It traces efforts by the U.S. and U.K. governments, along with rights holders, to pressure Google and GoDaddy into adopting non-binding agreements. It then presents two case studies. The first discusses search engines’ regulation of search results linking to infringing sites and a non-binding agreement struck among search engines (Google, Yahoo, and Microsoft) at the behest of the U.K. government. The second case study examines GoDaddy’s efforts to disable so-called illegal online pharmacies that operate in violation of U.S. federal and state laws. The chapter concludes that Internet firms’ practice of using chokepoints to deter access to targeted websites is highly problematic, as legitimate websites are mistakenly targeted and sanctioned. Automated enforcement programs exacerbate this problem, as they significantly increase the scale and speed of rights holders’ enforcement efforts without a corresponding increase in oversight.


2020 ◽  
pp. 137-162
Author(s):  
Sarah Esther Lageson

Utilizing a case study of the online mugshot extortion industry, this chapter discusses efforts by activists determined to take back their identities and protect those who are afraid to try. The decentralized production of criminal records and the intrusion of private websites that spread these records have created such complicated systems of data that some people are more concerned with creating even more “noise” within surveillance systems rather than conceptualizing or asserting their own privacy rights. These activists argue that the burden of reforming digital punishment must also be placed on technology companies and search engines, which currently avoid responsibility for disseminating mugshots and driving web traffic to shoddy criminal records.


First Monday ◽  
2008 ◽  
Author(s):  
Mark Meiss ◽  
Filippo Menczer

Understanding the qualitative differences between the sets of results from different search engines can be a difficult task. How many links must you follow from each list before you can reach a conclusion? We describe a user interface that allows users to quickly identify the most significant differences in content between two lists of Web pages. We have implemented this interface in CenSEARCHip, a system for comparing the effects of censorship policies on search engines.


Author(s):  
Agus Setiawan ◽  
Zulkifli Harahap ◽  
Dedy Syamsuar ◽  
Yesi Novaria Kunang

This research is a case study of Search Engine Optimization (SEO) for the Palembang Polytechnic of Tourism website. The main objective is to establish an SEO plan for the Palembang Polytechnic of Tourism (http://poltekpar-palembang.ac.id/) and to improve the site’s online visibility and ranking position in search engines (Google), with the aim of bringing more international traffic and prospective students to the website. SEO is a digital marketing technique for increasing web accessibility. In a globalized world, people use search engines such as Google to find out more about various topics quickly. Through a bibliographic review and qualitative analysis, the research focuses on understanding what SEO is and how it can be implemented for the Palembang Polytechnic of Tourism website. The results show that the most important goal of an SEO plan is to increase visibility and branding on search engines (Google). In this case, SEO was carried out by developing website content and targeting keywords through backlinks.
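The study does not list its concrete audit steps; as a minimal sketch of the kind of on-page check an SEO plan for a site like poltekpar-palembang.ac.id might begin with (the function name, checks, and sample HTML are all illustrative assumptions):

```python
import re

def basic_onpage_checks(html):
    """Hypothetical starting point for an on-page SEO audit:
    verify that the markup contains the elements search engines
    weigh most heavily (title, meta description, top heading)."""
    return {
        "has_title": bool(re.search(r"<title>[^<]+</title>", html, re.I)),
        "has_meta_description": bool(re.search(
            r'<meta\s+name=["\']description["\']', html, re.I)),
        "has_h1": bool(re.search(r"<h1[\s>]", html, re.I)),
    }

page = ("<html><head><title>Politeknik Pariwisata Palembang</title>"
        "</head><body><h1>Welcome</h1></body></html>")
print(basic_onpage_checks(page))
```

Checks like these cover only the content side of the plan; the backlink side described in the study (acquiring inbound links for target keywords) happens off-page and cannot be audited from the site's own HTML.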


2002 ◽  
Vol 63 (4) ◽  
pp. 354-365 ◽  
Author(s):  
Susan Augustine ◽  
Courtney Greene

Have Internet search engines influenced the way students search library Web pages? The results of this usability study reveal that students consistently and frequently use the library Web site’s internal search engine to find information rather than navigating through pages. If students are searching rather than navigating, library Web page designers must make metadata and powerful search engines priorities. The study also shows that students have difficulty interpreting library terminology and discerning differences among library resources, and that they prefer to seek human assistance when encountering problems online. These findings imply that library Web sites have not alleviated some of the basic and long-standing problems that have challenged librarians in the past.


Author(s):  
Gonçalo Jorge Morais da Costa ◽  
Nuno Sotero Alves da Silva ◽  
Piotr Pawlak

The “Informational Society” is ceaselessly discussed across all quadrants of society. Yet despite its role in illustrating the most recent progress of Western societies, it is notoriously difficult to characterize. In this societal evolution the leading role belongs to information, a polymorphic phenomenon and a polysemantic concept. Given this claim and the need for a multidimensional approach, the overall amount of information available online has reached an unparalleled level, and search engines have consequently become exceptionally important. The mainstream literature on search engines has debated the following perspectives: technology, users’ level of expertise and confidence, organizational impact, and, only recently, issues of power. However, the trade-off between informational flows and control has been disregarded. Chapter 27 addresses this gap; to that end, the chapter covers information, search engines, control and its dimensions, and explores Google as a case study.


Author(s):  
José Antonio Robles-Flores ◽  
Gregory Schymik ◽  
Julie Smith-David ◽  
Robert St. Louis

Web search engines typically retrieve a large number of web pages and overload business analysts with irrelevant information. One approach that has been proposed for overcoming some of these problems is automated Question Answering (QA). This paper describes a case study designed to determine the efficacy of QA systems at generating answers to original, fusion, list questions (questions that have not previously been asked and answered, questions for which the answer cannot be found on a single web site, and questions for which the answer is a list of items). Results indicate that QA algorithms are not very good at producing complete answer lists and that searchers are not very good at constructing answer lists from snippets. These findings indicate a need for QA research to focus on crowdsourcing answer lists and improving output formats.


2012 ◽  
Vol 17 (8) ◽  
pp. 1593-1603 ◽  
Author(s):  
Tatiana Gossen ◽  
Julia Hempel ◽  
Andreas Nürnberger
First Monday ◽  
2006 ◽  
Author(s):  
Judit Bar-Ilan

Andrei Broder, the well-known Internet researcher, does not have a home page of his own. This complicates finding information about him, especially since he has switched employers several times during the last ten years. Of particular interest is the page research.compaq.com/people/Andrei_Broder/bio.html, which, almost seven years after Andrei Broder left Compaq, still appeared among the top ten results displayed by Google for the query Andrei Broder as of June 2006. The title of this page is “No such user” and its content is “Sorry, Andrei Broder is no longer working in Compaq Corporate Research.” The case becomes even more interesting given that the actual page and the whole site research.compaq.com have been inaccessible since at least March 2006, and Google’s cached copy is from December 2005! In this paper we investigate the placement of this page at various search engines over the years, and describe searchers’ efforts to find information about the job title and business address of Andrei Broder as of May 2005, when he was still working at IBM.

