The spatial information content of the honey bee waggle dance

Author(s):  
Roger Schürch ◽  
Francis L. W. Ratnieks
2021 ◽  
Author(s):  
Matthew J. Hasenjager ◽  
William Hoppitt ◽  
Ellouise Leadbeater

Abstract: Honeybees famously use waggle dances to communicate foraging locations to nestmates in the hive, thereby recruiting them to those sites. The decision to dance is governed by rules that, when operating collectively, are assumed to direct foragers to the most profitable locations with little input from potential recruits, who are presumed to respond similarly to any dance regardless of its information content. Yet variation in receiver responses can qualitatively alter collective outcomes. Here, we use network-based diffusion analysis to compare the collective influence of dance information during recruitment to feeders at different distances. We further assess how any such effects might be achieved at the individual level by dance-followers either persisting with known sites when novel targets are distant and/or seeking more accurate spatial information to guide long-distance searches. Contrary to predictions, we found no evidence that dance-followers’ responses depended on target distance. While dance information was always key to feeder discovery, its importance did not vary with feeder distance, and bees were in fact quicker to abandon previously rewarding sites for distant alternatives. These findings provide empirical support for the longstanding assumption that self-organized foraging by honeybee colonies relies heavily on signal performance rules with limited input from recipients.
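Network-based diffusion analysis (NBDA) models the rate at which naive individuals acquire a behaviour (here, discovering a feeder) as a function of their network ties to already-informed individuals. A minimal sketch of the standard additive NBDA hazard; the network matrix, parameter values, and function name below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def nbda_rates(baseline, s, connections, informed):
    """Acquisition rate for each naive individual under additive NBDA.

    baseline    : baseline (asocial) discovery rate per unit time
    s           : social transmission parameter
    connections : (n, n) matrix of network ties (e.g. dance-following)
    informed    : boolean array, True if the individual already knows the site
    """
    social = connections @ informed.astype(float)  # weighted informed neighbours
    rates = baseline * (1.0 + s * social)          # asocial + social transmission
    rates[informed] = 0.0                          # informed bees cannot re-acquire
    return rates

# Toy example (hypothetical network): 4 bees, bee 0 already knows the feeder.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
informed = np.array([True, False, False, False])
print(nbda_rates(0.1, 5.0, A, informed))  # bees tied to bee 0 get elevated rates
```

Comparing models in which `s` is allowed to differ between near and far feeders is, in spirit, how one would test whether dance information matters more for distant targets.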


2020 ◽  
Vol 37 (2) ◽  
pp. 227-235 ◽  
Author(s):  
John I. Broussard ◽  
John B. Redell ◽  
Jing Zhao ◽  
Mark E. Maynard ◽  
Nobuhide Kobori ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Ying Zhou ◽  
Dazhuan Xu ◽  
Chao Shi ◽  
Weilin Tu ◽  
Junpeng Shi

In this paper, the mutual information between the received signals and the source in a coprime linear array is investigated. In Shannon’s information theory, mutual information quantifies the reduction in the a priori uncertainty of the transmitted message. Similarly, the spatial information in the coprime array is the mutual information between the direction of arrival (DOA), the source amplitude, and the received signals. This information content is composed of two parts: DOA information and scattering information. In a single-source scenario, we derive the theoretical expression for the DOA information and its asymptotic upper bound, and we formulate the corresponding theoretical expression for the scattering information. We also discuss applications of spatial information: the optimal array configuration can be obtained by maximizing the DOA information of the coprime array, and the same measure quantifies the performance difference between the coprime array and a uniform linear array. In addition, the entropy error is employed to evaluate estimation performance based on spatial information. Numerical simulations of the information content confirm our theoretical analysis. The results in this paper provide guidance for the design of coprime arrays in practical environments.
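A coprime linear array interleaves two uniform subarrays whose inter-element spacings M and N are coprime; its difference coarray then contains many more unique lags than the array has physical sensors, which is what makes the extra DOA information available. A minimal sketch of the geometry (positions in units of half-wavelength; the M, N values are illustrative):

```python
def coprime_array(M, N):
    """Sensor positions of a coprime array: N sensors at spacing M,
    interleaved with M sensors at spacing N (in half-wavelengths)."""
    positions = {M * n for n in range(N)} | {N * m for m in range(M)}
    return sorted(positions)

def difference_coarray(positions):
    """Unique pairwise differences (lags) usable for DOA estimation."""
    return sorted({p - q for p in positions for q in positions})

pos = coprime_array(3, 5)            # M=3, N=5 are coprime
print(pos)                            # 7 physical sensors
print(len(difference_coarray(pos)))  # number of unique lags
```

Searching over coprime pairs (M, N) and scoring each geometry by the derived DOA information is one way to realise the array-optimisation use the abstract describes.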


1998 ◽  
Vol 01 (02n03) ◽  
pp. 267-282 ◽  
Author(s):  
Carl Anderson

Honey bee nectar foragers returning to the hive experience a delay as they search for a receiver bee to whom they transfer their nectar. In this paper I describe a simulation of the "threshold rule" (Seeley, 1995), which relates the magnitude of this search delay to the probability of performing a recruitment behaviour: waggle dance, tremble dance, or no dance. Results show that this rule leads to self-organised, near-optimal worker allocation in a fluctuating environment, is extremely robust, and operates over a wide range of parameter values. The robustness appears to stem from the particular system of feedbacks operating within the colony.
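The threshold rule maps the search delay a returning forager experiences onto one of three behaviours: short delays signal spare receiver capacity (waggle dance, recruiting more foragers), long delays signal a receiver bottleneck (tremble dance, recruiting more receivers), and intermediate delays trigger neither. A minimal deterministic sketch; the two threshold values are illustrative placeholders, not Seeley's measured parameters:

```python
def threshold_rule(search_delay, waggle_max=20.0, tremble_min=50.0):
    """Choose a recruitment behaviour from the search delay (seconds)
    a returning nectar forager experiences before unloading."""
    if search_delay <= waggle_max:
        return "waggle dance"    # short delay: spare receivers, recruit foragers
    if search_delay >= tremble_min:
        return "tremble dance"   # long delay: bottleneck, recruit receivers
    return "no dance"            # intermediate delay: colony roughly balanced

print(threshold_rule(10.0))   # waggle dance
print(threshold_rule(35.0))   # no dance
print(threshold_rule(80.0))   # tremble dance
```

In the paper's simulations the mapping is probabilistic rather than a hard cutoff, which is part of why the allocation it produces is robust across parameter values.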


Ethology ◽  
2017 ◽  
Vol 123 (12) ◽  
pp. 974-980 ◽  
Author(s):  
Sylwia Łopuch ◽  
Adam Tofilski

2015 ◽  
Vol 213 ◽  
pp. 265-271 ◽  
Author(s):  
Nicholas J. Balfour ◽  
Katherine A. Fensome ◽  
Elizabeth E.W. Samuelson ◽  
Francis L.W. Ratnieks

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Hong Zhang ◽  
Peichao Gao ◽  
Zhilin Li

Abstract: Spatial information is fundamentally important to our daily lives. Many scholars have estimated that 80 percent or more of all information in the world is spatially referenced and can be regarded as spatial information. Given this importance, a discipline called spatial information theory has developed since the late 20th century, and international conferences on spatial information are held regularly; for example, COSIT (Conference on Spatial Information Theory) was established in 1993 and is held every two years around the world.

In spatial information theory, one fundamental question is how to measure the amount of information (i.e., the information content) of a spatial dataset. A widely used method is to employ entropy, proposed by the American mathematician Claude Shannon in 1948 and usually referred to as Shannon entropy or information entropy. Information entropy was originally designed to measure the statistical information content of a telegraph message. However, a spatial dataset such as a map or a remote sensing image contains not only statistical information but also spatial information, which information entropy cannot measure.

As a consequence, considerable effort has been made to improve information entropy for spatial datasets in either a vector or a raster format. There are two basic lines of thought: the first improves information entropy by redefining how its probability parameters are calculated, while the second introduces new parameters into the entropy formula. The former yields a number of improved information entropies; the latter leads to a series of variants of information entropy. Both seem capable of distinguishing different spatial datasets, but their performance in measuring spatial information has not been comprehensively evaluated.

This study first presents a state-of-the-art review of the improvements to information entropy for the information content of spatial datasets in a raster format (i.e., raster spatial data, such as a grey image or a digital elevation model). It then comprehensively evaluates the resulting measures (both improved information entropies and variants of information entropy) according to the Second Law of Thermodynamics. A set of evaluation criteria is proposed, along with corresponding measures, and all resulting measures are ranked accordingly.

The results reported in this study should be useful for entropic spatial data analysis. For example, in image fusion a crucial question is how to evaluate the performance of a fusion algorithm. This evaluation is usually achieved by using information entropy to measure the increase in information content during fusion; it can now be performed with the best-improved information entropy reported in this study.
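The limitation the abstract describes is easy to demonstrate: classical Shannon entropy treats a raster purely as a histogram of cell values, so permuting the cells leaves the score unchanged. A minimal sketch showing two rasters with identical histograms but different spatial patterns receiving the same entropy:

```python
import math
from collections import Counter

def shannon_entropy(raster):
    """Shannon entropy (bits) of the cell-value histogram of a 2-D raster.
    Purely statistical: rearranging the cells does not change the result."""
    cells = [v for row in raster for v in row]
    n = len(cells)
    counts = Counter(cells)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Same histogram (two 0s, two 1s), different spatial arrangement:
checker = [[0, 1], [1, 0]]
blocks  = [[0, 0], [1, 1]]
print(shannon_entropy(checker), shannon_entropy(blocks))  # both 1.0
```

The improved entropies and variants reviewed in the paper all aim to break exactly this tie by bringing cell arrangement into the measure.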


2019 ◽  
Vol 28 (15) ◽  
pp. 3602-3611 ◽  
Author(s):  
Fabian Nürnberger ◽  
Alexander Keller ◽  
Stephan Härtel ◽  
Ingolf Steffan‐Dewenter
