
Information retrieval

From Wikipedia, the free encyclopedia

Information retrieval (IR) in computing and information science is the task of identifying and retrieving information system resources that are relevant to an information need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based on full-text or other content-based indexing. Information retrieval is the science[1] of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds.

Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents. Web search engines are the most visible IR applications.

Overview

An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevance.

An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference of information retrieval searching compared to database searching.[2]

Depending on the application the data objects may be, for example, text documents, images,[3] audio,[4] mind maps[5] or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.

Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6]
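
The following sketch illustrates this score-and-rank loop in its simplest form; the three-document corpus and the raw term-overlap scoring function are invented for illustration, standing in for weighted models such as TF-IDF or BM25:

```python
from collections import Counter

# Toy corpus: document ID -> text (a bag-of-words view is assumed).
docs = {
    "d1": "information retrieval systems rank documents",
    "d2": "database queries return exact matches",
    "d3": "web search engines rank web documents by relevance",
}

def score(query: str, doc: str) -> float:
    """Score a document by raw term overlap with the query.
    Real systems use weighted models such as TF-IDF or BM25 instead."""
    counts = Counter(doc.lower().split())
    return float(sum(counts[t] for t in query.lower().split()))

def search(query: str, k: int = 2):
    """Score every document, sort by descending score, return the top k."""
    ranked = sorted(docs.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [(doc_id, score(query, text)) for doc_id, text in ranked[:k]]

print(search("rank documents"))  # [('d1', 2.0), ('d3', 2.0)]
```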

History

there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute

— J. E. Holmstrom, 1948

The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945.[7] It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film.[8] The first description of a computer searching for information was given by Holmstrom in 1948,[9] which contains an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents).[7] Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s.

In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. Its aim was to support the information retrieval community by supplying the infrastructure needed for evaluating text retrieval methodologies on very large text collections. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines boosted the need for very large scale retrieval systems even further.

By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm,[10] using the web’s hyperlink structure to assess page importance and improve relevance ranking.
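
The core of the PageRank idea can be sketched as power iteration over a link graph. The graph below is invented for illustration, while the 0.85 damping factor follows Brin and Page's original paper:

```python
# A minimal PageRank sketch via power iteration (toy link graph, not
# Google's implementation). links[p] lists the pages that p links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
N = len(pages)
d = 0.85  # damping factor from Brin & Page (1998)

rank = {p: 1.0 / N for p in pages}
for _ in range(50):  # iterate until the scores stabilize
    rank = {
        p: (1 - d) / N
           + d * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "C" ranks highest
```

Pages with many incoming links from important pages accumulate rank, so "C", which is linked from three pages, ends up most important in this toy graph.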

During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses[11] have highlighted Bing's semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing.

A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12] BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13]

Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as MS MARCO (Microsoft MAchine Reading COmprehension, 2019)[14] became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15]

As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models. Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16] Dense models, such as the late-interaction model ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17] Hybrid models aim to combine the advantages of both, balancing the lexical (token-level) precision of sparse methods with the semantic depth of dense models. This categorization reflects the trade-offs among scalability, relevance, and efficiency in retrieval systems.[18]

As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come to the fore. Research now focuses not just on relevance and efficiency, but also on transparency, accountability, and user trust in retrieval algorithms.

Applications

Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category):

General applications

Domain-specific applications

Other retrieval methods

Methods and techniques in which information retrieval is employed include:

Model types

Categorization of IR-models (translated from German entry, original source Dominik Kuropka)

In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. The figure illustrates the relationship of some common models, categorized according to two dimensions: the mathematical basis and the properties of the model.

First dimension: mathematical basis

Second dimension: properties of the model

  • Models without term interdependencies treat different terms/words as independent. This is usually represented in vector space models by the orthogonality assumption for term vectors, or in probabilistic models by an independence assumption for term variables.
  • Models with immanent term interdependencies allow a representation of interdependencies between terms. However, the degree of interdependency between two terms is defined by the model itself. It is usually derived, directly or indirectly (e.g. by dimensionality reduction), from the co-occurrence of those terms in the whole document collection; a small sketch follows this list.
  • Models with transcendent term interdependencies allow a representation of interdependencies between terms, but they do not specify how the interdependency between two terms is defined. They rely on an external source (for example, a human or a sophisticated algorithm) for the degree of interdependency between two terms.
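
As a minimal illustration of immanent interdependencies derived from co-occurrence, the following sketch applies a truncated SVD (the dimensionality reduction at the heart of latent semantic analysis) to a toy term-document matrix; the corpus and dimensions are invented for illustration:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# "car" and "auto" never co-occur with "banana", but both appear in doc 3.
X = np.array([
    [2, 0, 1, 0],   # "car"
    [0, 2, 1, 0],   # "auto"
    [0, 0, 0, 3],   # "banana"
], dtype=float)

# Truncated SVD: keep only the top 2 latent dimensions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
term_vecs = U[:, :2] * S[:2]  # low-rank term representations

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The model now assigns "car" and "auto" a high similarity, an
# interdependency derived purely from their co-occurrence pattern.
print(cos(term_vecs[0], term_vecs[1]))  # close to 1.0
print(cos(term_vecs[0], term_vecs[2]))  # close to 0.0
```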

Third dimension: representational approach-based classification

In addition to these theoretical distinctions, modern information retrieval models are also categorized by how queries and documents are represented and compared, using a practical classification that distinguishes between sparse, dense and hybrid models.[19]

  • Sparse models utilize interpretable, term-based representations and typically rely on inverted index structures. Classical methods such as TF-IDF and BM25 fall under this category, along with more recent learned sparse models that integrate neural architectures while retaining sparsity.[20]
  • Dense models represent queries and documents as continuous vectors using deep learning models, typically transformer-based encoders. These models enable semantic similarity matching beyond exact term overlap and are used in tasks involving semantic search and question answering.[21]
  • Hybrid models aim to combine the strengths of both approaches, integrating lexical (token-level) and semantic signals through score fusion, late interaction, or multi-stage ranking pipelines, as in the sketch below.[22]
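
A minimal sketch of hybrid retrieval via weighted score fusion, assuming the sparse and dense scores for one query have already been computed; the document IDs, scores, and the blending weight alpha here are all hypothetical:

```python
# Hypothetical precomputed scores for one query over three documents.
sparse_scores = {"d1": 12.4, "d2": 3.1, "d3": 8.7}    # e.g. BM25 (lexical)
dense_scores  = {"d1": 0.62, "d2": 0.81, "d3": 0.74}  # e.g. embedding cosine

def min_max(scores: dict) -> dict:
    """Normalize scores to [0, 1] so the two signals are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def hybrid_rank(alpha: float = 0.5) -> list:
    """Weighted score fusion: alpha blends lexical and semantic evidence."""
    sp, de = min_max(sparse_scores), min_max(dense_scores)
    fused = {d: alpha * sp[d] + (1 - alpha) * de[d] for d in sp}
    return sorted(fused.items(), key=lambda kv: -kv[1])

print(hybrid_rank())  # "d3" wins: it scores well on both signals
```

Min-max normalization is one simple way to put the two score distributions on a common scale before blending; alternatives such as reciprocal rank fusion are also widely used.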

This classification has become increasingly common in both academic research and real-world applications, and is widely used in evaluation benchmarks for information retrieval models.[23][20]

Performance and correctness measures

The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval (in which each document either matches or does not match a query) or top-k retrieval, include precision and recall. All measures assume a ground-truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
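
As a small worked example of the two set-based measures (the relevance judgments are invented for illustration), suppose a system returns four documents for a query with three known-relevant documents:

```python
# One query with invented ground-truth relevance judgments.
retrieved = {"d1", "d2", "d3", "d4"}   # documents the system returned
relevant  = {"d2", "d4", "d7"}         # documents judged relevant

true_positives = retrieved & relevant  # {"d2", "d4"}
precision = len(true_positives) / len(retrieved)  # 2/4 = 0.50
recall    = len(true_positives) / len(relevant)   # 2/3 ~= 0.67

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```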

Libraries for searching and indexing

Timeline

  • Before the 1900s
    1801: Joseph Marie Jacquard invents the Jacquard loom, the first machine to use punched cards to control a sequence of operations.
    1880s: Herman Hollerith invents an electro-mechanical data tabulator using punch cards as a machine readable medium.
    1890: Hollerith cards, keypunches and tabulators used to process the 1890 US census data.
  • 1920s–1930s
    Emanuel Goldberg submits patents for his "Statistical Machine", a document search engine that used photoelectric cells and pattern recognition to search the metadata on rolls of microfilmed documents.
  • 1940s–1950s
    late 1940s: The US military confronted problems of indexing and retrieval of wartime scientific research documents captured from the Germans.
    1945: Vannevar Bush's As We May Think appeared in Atlantic Monthly.
    1947: Hans Peter Luhn (research engineer at IBM since 1941) began work on a mechanized punch card-based system for searching chemical compounds.
    1950s: Growing concern in the US about a "science gap" with the USSR motivated funding for, and provided a backdrop to, mechanized literature searching systems (Allen Kent et al.) and the invention of the citation index by Eugene Garfield.
    1950: The term "information retrieval" was coined by Calvin Mooers.[24]
    1951: Philip Bagley conducted the earliest experiment in computerized document retrieval in a master's thesis at MIT.[25]
    1955: Allen Kent joined Case Western Reserve University, and eventually became associate director of the Center for Documentation and Communications Research. That same year, Kent and colleagues published a paper in American Documentation describing the precision and recall measures as well as detailing a proposed "framework" for evaluating an IR system which included statistical sampling methods for determining the number of relevant documents not retrieved.[26]
    1958: International Conference on Scientific Information Washington DC included consideration of IR systems as a solution to problems identified. See: Proceedings of the International Conference on Scientific Information, 1958 (National Academy of Sciences, Washington, DC, 1959)
    1959: Hans Peter Luhn published "Auto-encoding of documents for information retrieval".
  • 1960s:
    early 1960s: Gerard Salton began work on IR at Harvard, later moved to Cornell.
    1960: Melvin Earl Maron and John Lary Kuhns[27] published "On relevance, probabilistic indexing, and information retrieval" in the Journal of the ACM 7(3):216–244, July 1960.
    1962:
    • Cyril W. Cleverdon published early findings of the Cranfield studies, developing a model for IR system evaluation. See: Cyril W. Cleverdon, "Report on the Testing and Analysis of an Investigation into the Comparative Efficiency of Indexing Systems". Cranfield Collection of Aeronautics, Cranfield, England, 1962.
    • Kent published Information Analysis and Retrieval.
    1963:
    • Weinberg report "Science, Government and Information" gave a full articulation of the idea of a "crisis of scientific information". The report was named after Dr. Alvin Weinberg.
    • Joseph Becker and Robert M. Hayes published text on information retrieval. Becker, Joseph; Hayes, Robert Mayo. Information storage and retrieval: tools, elements, theories. New York, Wiley (1963).
    1964:
    • Karen Spärck Jones finished her thesis at Cambridge, Synonymy and Semantic Classification, and continued work on computational linguistics as it applies to IR.
    • The National Bureau of Standards sponsored a symposium titled "Statistical Association Methods for Mechanized Documentation". Several highly significant papers were presented, including what is believed to be G. Salton's first published reference to the SMART system.
    mid-1960s:
    • National Library of Medicine developed MEDLARS (Medical Literature Analysis and Retrieval System), the first major machine-readable database and batch-retrieval system.
    • Project Intrex at MIT.
    1965: J. C. R. Licklider published Libraries of the Future.
    1966: Don Swanson was involved in studies at University of Chicago on Requirements for Future Catalogs.
    late 1960s: F. Wilfrid Lancaster completed evaluation studies of the MEDLARS system and published the first edition of his text on information retrieval.
    1968:
    • Gerard Salton published Automatic Information Organization and Retrieval.
    • John W. Sammon, Jr.'s RADC Tech report "Some Mathematics of Information Storage and Retrieval..." outlined the vector model.
    1969: Sammon's "A nonlinear mapping for data structure analysis" (IEEE Transactions on Computers) was the first proposal for a visualization interface to an IR system.
  • 1970s
    early 1970s:
    • First online systems—NLM's AIM-TWX, MEDLINE; Lockheed's Dialog; SDC's ORBIT.
    • Theodor Nelson, promoting the concept of hypertext, published Computer Lib/Dream Machines.
    1971: Nicholas Jardine and Cornelis J. van Rijsbergen published "The use of hierarchic clustering in information retrieval", which articulated the "cluster hypothesis".[28]
    1975: Three highly influential publications by Salton fully articulated his vector processing framework and term discrimination model:
    • A Theory of Indexing (Society for Industrial and Applied Mathematics)
    • A Theory of Term Importance in Automatic Text Analysis (JASIS v. 26)
    • A Vector Space Model for Automatic Indexing (CACM 18:11)
    1978: The First ACM SIGIR conference.
    1979: C. J. van Rijsbergen published Information Retrieval (Butterworths). Heavy emphasis on probabilistic models.
    1979: Tamas Doszkocs implemented the CITE natural language user interface for MEDLINE at the National Library of Medicine. The CITE system supported free form query input, ranked output and relevance feedback.[29]
  • 1980s
    1980: First international ACM SIGIR conference, joint with British Computer Society IR group in Cambridge.
    1982: Nicholas J. Belkin, Robert N. Oddy, and Helen M. Brooks proposed the ASK (Anomalous State of Knowledge) viewpoint for information retrieval. This was an important concept, though their automated analysis tool proved ultimately disappointing.
    1983: Salton (and Michael J. McGill) published Introduction to Modern Information Retrieval (McGraw-Hill), with heavy emphasis on vector models.
    1985: David Blair and Bill Maron published An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System.
    mid-1980s: Efforts to develop end-user versions of commercial IR systems.
    1985–1993: Key papers on and experimental systems for visualization interfaces.
    Work by Donald B. Crouch, Robert R. Korfhage, Matthew Chalmers, Anselm Spoerri and others.
    1989: First World Wide Web proposals by Tim Berners-Lee at CERN.
  • 1990s
    1992: First TREC conference.
    1997: Publication of Korfhage's Information Storage and Retrieval[30] with emphasis on visualization and multi-reference point systems.
    1998: Google is founded by Larry Page and Sergey Brin. It introduces the PageRank algorithm, which evaluates the importance of web pages based on hyperlink structure.[31]
    1999: Publication of Ricardo Baeza-Yates and Berthier Ribeiro-Neto's Modern Information Retrieval by Addison Wesley, the first book that attempts to cover all IR.
  • 2000s
    2001: Wikipedia launches as a free, collaborative online encyclopedia. It quickly becomes a major resource for information retrieval, particularly for natural language processing and semantic search benchmarks.[32]
    2009: Microsoft launches Bing, introducing features such as related searches, semantic suggestions, and later incorporating deep learning techniques into its ranking algorithms.[33]
  • 2010s
    2013: Google’s Hummingbird algorithm goes live, marking a shift from keyword matching toward understanding query intent and semantic context in search queries.[34]
    2018: Google AI researchers release BERT (Bidirectional Encoder Representations from Transformers), enabling deep bidirectional understanding of language and improving document ranking and query understanding in IR.[35]
    2019: Microsoft introduces MS MARCO (Microsoft MAchine Reading COmprehension), a large-scale dataset designed for training and evaluating machine reading and passage ranking models.[36]
  • 2020s
    2020: The ColBERT (Contextualized Late Interaction over BERT) model, designed for efficient passage retrieval using contextualized embeddings, was introduced at SIGIR 2020.[37][38]
    2021: SPLADE is introduced at SIGIR 2021. It is a sparse neural retrieval model that balances lexical and semantic features using masked language modeling and sparsity regularization.[39]
    2021: The BEIR benchmark is released to evaluate zero-shot IR across 18 datasets covering diverse tasks. It standardizes comparisons between dense, sparse, and hybrid IR models.[40]

Major conferences

Awards in the field

See also

References

  1. ^ Luk, R. W. P. (2022). "Why is information retrieval a scientific discipline?". Foundations of Science. 27 (2): 427–453. doi:10.1007/s10699-020-09685-x. hdl:10397/94873. S2CID 220506422.
  2. ^ Jansen, B. J. and Rieh, S. (2010) The Seventeen Theoretical Constructs of Information Searching and Information Retrieval Archived 2025-08-14 at the Wayback Machine. Journal of the American Society for Information Sciences and Technology. 61(8), 1517–1534.
  3. ^ Goodrum, Abby A. (2000). "Image Information Retrieval: An Overview of Current Research". Informing Science. 3 (2).
  4. ^ Foote, Jonathan (1999). "An overview of audio information retrieval". Multimedia Systems. 7: 2–10. CiteSeerX 10.1.1.39.6339. doi:10.1007/s005300050106. S2CID 2000641.
  5. ^ Beel, J?ran; Gipp, Bela; Stiller, Jan-Olaf (2009). Information Retrieval On Mind Maps - What Could It Be Good For?. Proceedings of the 5th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom'09). Washington, DC: IEEE. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  6. ^ Frakes, William B.; Baeza-Yates, Ricardo (1992). Information Retrieval Data Structures & Algorithms. Prentice-Hall, Inc. ISBN 978-0-13-463837-9. Archived from the original on 2025-08-14.
  7. ^ a b Singhal, Amit (2001). "Modern Information Retrieval: A Brief Overview" (PDF). Bulletin of the IEEE Computer Society Technical Committee on Data Engineering. 24 (4): 35–43.
  8. ^ Mark Sanderson & W. Bruce Croft (2012). "The History of Information Retrieval Research". Proceedings of the IEEE. 100: 1444–1451. doi:10.1109/jproc.2012.2189916.
  9. ^ JE Holmstrom (1948). "'Section III. Opening Plenary Session". The Royal Society Scientific Information Conference, 21 June-2 July 1948: Report and Papers Submitted: 85.
  10. ^ "The Anatomy of a Search Engine". infolab.stanford.edu. Retrieved 2025-08-14.
  11. ^ Uyar, Ahmet; Aliyu, Farouk Musa (2025-08-14). "Evaluating search features of Google Knowledge Graph and Bing Satori: Entity types, list searches and query interfaces". Online Information Review. 39 (2): 197–213. doi:10.1108/OIR-10-2014-0257. ISSN 1468-4527.
  12. ^ Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805 [cs.CL].
  13. ^ Gardazi, Nadia Mushtaq; Daud, Ali; Malik, Muhammad Kamran; Bukhari, Amal; Alsahfi, Tariq; Alshemaimri, Bader (2025-08-14). "BERT applications in natural language processing: a review". Artificial Intelligence Review. 58 (6): 166. doi:10.1007/s10462-025-11162-5. ISSN 1573-7462.
  14. ^ Bajaj, Payal; Campos, Daniel; Craswell, Nick; Deng, Li; Gao, Jianfeng; Liu, Xiaodong; Majumder, Rangan; McNamara, Andrew; Mitra, Bhaskar; Nguyen, Tri; Rosenberg, Mir; Song, Xia; Stoica, Alina; Tiwary, Saurabh; Wang, Tong (2016). "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset". arXiv:1611.09268 [cs.CL].
  15. ^ Craswell, Nick; Mitra, Bhaskar; Yilmaz, Emine; Rahmani, Hossein A.; Campos, Daniel; Lin, Jimmy; Voorhees, Ellen M.; Soboroff, Ian (2025-08-14). "Overview of the TREC 2023 Deep Learning Track".
  16. ^ arXiv:2107.09226
  17. ^ Khattab, Omar; Zaharia, Matei (2025-08-14). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '20. New York, NY, USA: Association for Computing Machinery. pp. 39–48. doi:10.1145/3397271.3401075. ISBN 978-1-4503-8016-4.
  18. ^ Lin, Jimmy; Nogueira, Rodrigo; Yates, Andrew (2020). "Pretrained Transformers for Text Ranking: BERT and Beyond". arXiv:2010.06467 [cs.IR].
  19. ^ Kim, Dohyun; Zhao, Lina; Chung, Eric; Park, Eun-Jae (2021). "Pressure-robust staggered DG methods for the Navier-Stokes equations on general meshes". arXiv:2107.09226 [math.NA].
  20. ^ a b Thakur, Nandan; Reimers, Nils; Rücklé, Andreas; Srivastava, Abhishek; Gurevych, Iryna (2021). "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models". arXiv:2104.08663 [cs.IR].
  21. ^ Lau, Jey Han; Armendariz, Carlos; Lappin, Shalom; Purver, Matthew; Shu, Chang (2020). Johnson, Mark; Roark, Brian; Nenkova, Ani (eds.). "How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context". Transactions of the Association for Computational Linguistics. 8: 296–310. doi:10.1162/tacl_a_00315.
  22. ^ Arabzadeh, Negar; Yan, Xinyi; Clarke, Charles L. A. (2021). "Predicting Efficiency/Effectiveness Trade-offs for Dense vs. Sparse Retrieval Strategy Selection". arXiv:2109.10739 [cs.IR].
  23. ^ Lin, Jimmy; Nogueira, Rodrigo; Yates, Andrew (2020). "Pretrained Transformers for Text Ranking: BERT and Beyond". arXiv:2010.06467 [cs.IR].
  24. ^ Mooers, Calvin N.; The Theory of Digital Handling of Non-numerical Information and its Implications to Machine Economics (Zator Technical Bulletin No. 48), cited in Fairthorne, R. A. (1958). "Automatic Retrieval of Recorded Information". The Computer Journal. 1 (1): 37. doi:10.1093/comjnl/1.1.36.
  25. ^ Doyle, Lauren; Becker, Joseph (1975). Information Retrieval and Processing. Melville. pp. 410 pp. ISBN 978-0-471-22151-7.
  26. ^ Perry, James W.; Kent, Allen; Berry, Madeline M. (1955). "Machine literature searching X. Machine language; factors underlying its design and development". American Documentation. 6 (4): 242–254. doi:10.1002/asi.5090060411.
  27. ^ Maron, Melvin E. (2008). "An Historical Note on the Origins of Probabilistic Indexing" (PDF). Information Processing and Management. 44 (2): 971–972. doi:10.1016/j.ipm.2007.02.012.
  28. ^ N. Jardine, C.J. van Rijsbergen (December 1971). "The use of hierarchic clustering in information retrieval". Information Storage and Retrieval. 7 (5): 217–240. doi:10.1016/0020-0271(71)90051-9.
  29. ^ Doszkocs, T.E. & Rapp, B.A. (1979). "Searching MEDLINE in English: a Prototype User Interface with Natural Language Query, Ranked Output, and relevance feedback," In: Proceedings of the ASIS Annual Meeting, 16: 131–139.
  30. ^ Korfhage, Robert R. (1997). Information Storage and Retrieval. Wiley. pp. 368 pp. ISBN 978-0-471-14338-3.
  31. ^ "The Anatomy of a Search Engine". infolab.stanford.edu. Retrieved 2025-08-14.
  32. ^ "History of Wikipedia", Wikipedia, 2025-08-14, retrieved 2025-08-14
  33. ^ Uyar, Ahmet; Aliyu, Farouk Musa (2025-08-14). "Evaluating search features of Google Knowledge Graph and Bing Satori: Entity types, list searches and query interfaces". Online Information Review. 39 (2): 197–213. doi:10.1108/OIR-10-2014-0257. ISSN 1468-4527.
  34. ^ Sullivan, Danny (2025-08-14). "FAQ: All About The New Google "Hummingbird" Algorithm". Search Engine Land. Retrieved 2025-08-14.
  35. ^ Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805 [cs.CL].
  36. ^ Bajaj, Payal; Campos, Daniel; Craswell, Nick; Deng, Li; Gao, Jianfeng; Liu, Xiaodong; Majumder, Rangan; McNamara, Andrew; Mitra, Bhaskar; Nguyen, Tri; Rosenberg, Mir; Song, Xia; Stoica, Alina; Tiwary, Saurabh; Wang, Tong (2016). "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset". arXiv:1611.09268 [cs.CL].
  37. ^ Khattab, Omar; Zaharia, Matei (2020). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". arXiv:2004.12832 [cs.IR].
  38. ^ Khattab, Omar; Zaharia, Matei (2025-08-14). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '20. New York, NY, USA: Association for Computing Machinery. pp. 39–48. doi:10.1145/3397271.3401075. ISBN 978-1-4503-8016-4.
  39. ^ Jones, Rosie; Zamani, Hamed; Schedl, Markus; Chen, Ching-Wei; Reddy, Sravana; Clifton, Ann; Karlgren, Jussi; Hashemi, Helia; Pappu, Aasish; Nazari, Zahra; Yang, Longqi; Semerci, Oguz; Bouchard, Hugues; Carterette, Ben (2025-08-14). "Current Challenges and Future Directions in Podcast Information Access". Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '21. New York, NY, USA: Association for Computing Machinery. pp. 1554–1565. arXiv:2106.09227. doi:10.1145/3404835.3462805. ISBN 978-1-4503-8037-9.
  40. ^ Thakur, Nandan; Reimers, Nils; Rücklé, Andreas; Srivastava, Abhishek; Gurevych, Iryna (2021). "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models". arXiv:2104.08663 [cs.IR].

Further reading
