Algorithmic efficiency

From Wikipedia, the free encyclopedia

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.

For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.

For example, cycle sort and timsort are both algorithms to sort a list of items from smallest to largest. Cycle sort organizes the list in time proportional to the number of elements squared (O(n²), see Big O notation), but minimizes the writes to the original array and only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the program/erase cycles and memory footprint of the sorting is more important, cycle sort is a better choice.
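The trade-off can be made concrete by counting writes. The sketch below is a minimal, illustrative Python implementation of cycle sort (not a reference implementation) that reports how many writes it performs to the array; each out-of-place element is written exactly once, directly into its final position. CPython's built-in sorted() is based on Timsort and serves as the fast, write-unconstrained alternative.

```python
def cycle_sort(arr):
    """Sort arr in place; return the number of writes to arr."""
    writes = 0
    n = len(arr)
    for cycle_start in range(n - 1):
        item = arr[cycle_start]
        # Count how many elements are smaller: that is item's final index.
        pos = cycle_start
        for i in range(cycle_start + 1, n):
            if arr[i] < item:
                pos += 1
        if pos == cycle_start:      # already in place, no write needed
            continue
        while item == arr[pos]:     # skip past any duplicates
            pos += 1
        arr[pos], item = item, arr[pos]   # one write per placement
        writes += 1
        while pos != cycle_start:   # rotate the rest of the cycle
            pos = cycle_start
            for i in range(cycle_start + 1, n):
                if arr[i] < item:
                    pos += 1
            while item == arr[pos]:
                pos += 1
            arr[pos], item = item, arr[pos]
            writes += 1
    return writes

data = [5, 2, 9, 2, 7, 1]
copy = data.copy()
print(cycle_sort(copy), "writes ->", copy)   # at most len(data) writes
print(sorted(data))                          # Timsort: fast, O(n) extra space
```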

Background


The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:

"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"[1]

Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory.

Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration:

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[2]

Overview


An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago.

Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.

There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order.

In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues.

Theoretical analysis


In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where O(n) roughly means the time requirement for an algorithm is proportional to n, omitting lower-order terms that contribute less than n to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length encountered in most data-intensive programs.

Some examples of Big O notation applied to algorithms' asymptotic time complexity include:

Notation | Name | Examples
O(1) | constant | Finding the median from a sorted list of measurements; using a constant-size lookup table; using a suitable hash function for looking up an item.
O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap.
O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry.
O(n log n) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort.
O(n²) | quadratic | Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort or insertion sort.
O(cⁿ), c > 1 | exponential | Finding the optimal (non-approximate) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search.
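These growth rates can also be checked empirically. The following sketch is illustrative only (exact timings depend on the hardware and interpreter): it times a textbook O(n²) bubble sort against Python's built-in O(n log n) sort as the input size doubles.

```python
import random
import time

def bubble_sort(a):
    """O(n^2) worst case: repeatedly swap adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break

for n in (1_000, 2_000, 4_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    bubble_sort(data.copy())
    t1 = time.perf_counter()
    sorted(data)   # built-in O(n log n) adaptive merge sort (Timsort)
    t2 = time.perf_counter()
    # Doubling n roughly quadruples the O(n^2) time but only slightly
    # more than doubles the O(n log n) time.
    print(f"n={n}: bubble {t1 - t0:.3f}s, built-in {t2 - t1:.4f}s")
```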

Measuring performance


For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed.

Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages, for example,[3][4] and The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.

Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user-specified criteria. This is quite simple, as a "Nine Language Performance Round-up" by Christopher W. Cowell-Shah demonstrates by example.[5]
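As a minimal illustration, a "do it yourself" micro-benchmark can be written in a few lines with Python's standard timeit module. The snippets compared here are arbitrary examples; the usual precaution of repeating runs and taking the best time is applied to reduce interference from other processes.

```python
import timeit

# Two ways of computing the same sum; which is faster is exactly the
# kind of question a DIY benchmark can answer for a given platform.
snippets = {
    "generator": "sum(i * i for i in range(10_000))",
    "list comp": "sum([i * i for i in range(10_000)])",
}

for name, code in snippets.items():
    # Best of 5 repeats, 200 executions each.
    best = min(timeit.repeat(code, number=200, repeat=5))
    print(f"{name}: {best:.4f}s for 200 runs")
```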

Implementation concerns


Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded,[6] or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler.[3] See the articles on just-in-time compilation and interpreted languages.

There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.[7]

Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, though in some cases they can be reconfigured easily. As parallel and distributed computing grew in importance in the late 2010s, more investments were made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI.

Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations.

Measures of resource usage


Measures are normally expressed as a function of the size of the input n.

The two most common measures are:

  • Time: how long does the algorithm take to complete?
  • Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage).

For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are:

  • Direct power consumption: power needed directly to operate the computer.
  • Indirect power consumption: power needed for cooling, lighting, etc.

As of 2018, power consumption is growing in importance as a metric for computational tasks of all types and at all scales, ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing.

Less common measures of computational efficiency may also be relevant in some cases:

  • Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted. Displaying a picture or image (e.g. the Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks (see the sketch after this list).
  • External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference.
  • Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event.
  • Total cost of ownership: particularly if a computer is dedicated to one particular algorithm.
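As a brief illustration of trading computation for transmission size, the following sketch compresses a deliberately redundant payload with Python's standard zlib module; the payload itself is an arbitrary example.

```python
import zlib

# A highly redundant payload compresses well; the compressed size is a
# proxy for transmission cost on a bandwidth-limited link, bought at the
# price of extra CPU work on both ends.
payload = ("GET /logo.png HTTP/1.1\r\n" * 1_000).encode()
compressed = zlib.compress(payload, level=9)
print(f"raw: {len(payload):,} bytes, compressed: {len(compressed):,} bytes")
```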

Time


Theory


Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze.

Practice


A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests.

Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.

This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions.
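A minimal sketch of such a measurement in Python, using the standard library's process_time() for CPU time and perf_counter() for elapsed time, averaged over several trials; the sorting workload here is an arbitrary example.

```python
import statistics
import time

def measure(func, *args, trials=5):
    """Return (mean CPU seconds, mean elapsed seconds) over several trials."""
    cpu, wall = [], []
    for _ in range(trials):
        c0, w0 = time.process_time(), time.perf_counter()
        func(*args)
        cpu.append(time.process_time() - c0)
        wall.append(time.perf_counter() - w0)
    return statistics.mean(cpu), statistics.mean(wall)

# Example workload: sorting a million integers in reverse order.
mean_cpu, mean_wall = measure(sorted, list(range(1_000_000, 0, -1)))
print(f"CPU {mean_cpu:.3f}s, elapsed {mean_wall:.3f}s")
```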

Space


This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for the time analysis above, analyze the algorithm, typically using space complexity analysis to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation. A sketch after the list below shows one way to observe working-space usage in practice.

There are up to four aspects of memory usage to consider:

  • The amount of memory needed to hold the code for the algorithm.
  • The amount of memory needed for the input data.
  • The amount of memory needed for any output data.
    • Some algorithms, such as sorting, often rearrange the input data and do not need any additional space for output data. This property is referred to as "in-place" operation.
  • The amount of memory needed as working space during the calculation.
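One way to observe the difference between in-place and out-of-place working space is Python's standard tracemalloc module. The sketch below is illustrative only: the absolute numbers depend on the interpreter, and CPython's in-place sort may itself allocate a temporary merge buffer.

```python
import random
import tracemalloc

data = [random.random() for _ in range(1_000_000)]

# In-place sort: no output list is allocated (though Timsort may still
# use a temporary merge buffer internally).
tracemalloc.start()
data.sort()
_, peak_in_place = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Out-of-place sort: allocates a full copy of the input, O(n) extra space.
tracemalloc.start()
result = sorted(data)
_, peak_copy = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"in-place peak: {peak_in_place:,} bytes; sorted() peak: {peak_copy:,} bytes")
```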

Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it is typical for personal computers to have between 4 and 32 GB of RAM, an increase of millions of times as much memory.

Caching and memory hierarchy


Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant:

  • Processor registers are the fastest memory with the least amount of space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture.
  • Cache memory is the second fastest, and second smallest, memory available in the memory hierarchy. Caches are present in processors such as CPUs or GPUs, where they are typically implemented in static RAM, though they can also be found in peripherals such as disk drives. Processor caches often have their own multi-level hierarchy; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. If the data is in the L1 cache, this operates at speeds comparable to the CPU or GPU's arithmetic logic unit or floating-point unit (about 2–10 times slower).[8] It is about 10 times slower again if there is an L1 cache miss and the data must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present.
  • Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10-100 times slower.[8] As of 2018, RAM is increasingly implemented on-chip of processors, as CPU or GPU memory.[citation needed]
  • Paged memory, often used for virtual memory management, is memory stored in secondary storage such as a hard disk, and is an extension to the memory hierarchy which allows use of a potentially larger storage space, at the cost of much higher latency, typically around 1000 times slower than a cache miss for a value in RAM.[8] While originally motivated to create the impression of higher amounts of memory being available than were truly available, virtual memory is more important in contemporary usage for its time-space tradeoff and enabling the usage of virtual machines.[8] Cache misses from main memory are called page faults, and incur huge performance penalties on programs.

An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another.

In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
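The effect of locality can be observed even from a high-level language. The following sketch is illustrative: it traverses the same 2D structure in row-major and column-major order. In CPython part of the measured gap comes from indexing overhead, and the cache effect is far larger in languages with contiguous arrays such as C, but traversal order still matters.

```python
import time

N = 2_000
grid = [[i * N + j for j in range(N)] for i in range(N)]

t0 = time.perf_counter()
total = 0
for row in grid:            # row by row: consecutive accesses touch
    for x in row:           # memory that is close together
        total += x
t1 = time.perf_counter()

total = 0
for j in range(N):          # column by column: each access jumps to a
    for i in range(N):      # different row, defeating spatial locality
        total += grid[i][j]
t2 = time.perf_counter()

print(f"row-major {t1 - t0:.2f}s, column-major {t2 - t1:.2f}s")
```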

References

  1. ^ Green, Christopher, Classics in the History of Psychology, retrieved 19 May 2013
  2. ^ Knuth, Donald (1974), "Structured Programming with go-to Statements" (PDF), Computing Surveys, 6 (4): 261–301, CiteSeerX 10.1.1.103.6084, doi:10.1145/356635.356640, S2CID 207630080, archived from the original (PDF) on 24 August 2009, retrieved 19 May 2013
  3. ^ a b "Floating Point Benchmark: Comparing Languages (Fourmilog: None Dare Call It Reason)". Fourmilab.ch. 4 August 2005. Retrieved 14 December 2011.
  4. ^ "Whetstone Benchmark History". Roylongbottom.org.uk. Retrieved 14 December 2011.
  5. ^ OSNews Staff. "Nine Language Performance Round-up: Benchmarking Math & File I/O". osnews.com. Retrieved 18 September 2018.
  6. ^ Kriegel, Hans-Peter; Schubert, Erich; Zimek, Arthur (2016). "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?". Knowledge and Information Systems. 52 (2): 341–378. doi:10.1007/s10115-016-1004-2. ISSN 0219-1377. S2CID 40772241.
  7. ^ Steele, Guy Lewis Jr. (October 1977). "Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO". MIT AI Lab. AI Lab Memo AIM-443.
  8. ^ a b c d Hennessy, John L; Patterson, David A; Asanović, Krste; Bakos, Jason D; Colwell, Robert P; Bhattacharjee, Abhishek; Conte, Thomas M; Duato, José; Franklin, Diana; Goldberg, David; Jouppi, Norman P; Li, Sheng; Muralimanohar, Naveen; Peterson, Gregory D; Pinkston, Timothy Mark; Ranganathan, Prakash; Wood, David Allen; Young, Clifford; Zaky, Amr (2011). Computer Architecture: a Quantitative Approach (Sixth ed.). Elsevier Science. ISBN 978-0128119051. OCLC 983459758.