Asymmetries in the Sequential Learning of Brand Associations: Implications for the Early Entrant Advantage

2009 ◽  
Vol 35 (5) ◽  
pp. 788-799 ◽  
Author(s):  
Marcus Cunha ◽  
Juliano Laran

Author(s):  
Burenida Sartika

The research was conducted from April to June 2012 using both primary and secondary data. The primary data covered respondent demographics and brand equity. One hundred respondents were selected by accidental sampling around Bengkulu city. Data were analysed both quantitatively and qualitatively; quantitative data were processed with a validity test, the Hoyt reliability test, Cochran Q test analysis, Importance-Performance Analysis (IPA), and a brand-switching matrix. The results showed that NU Green Tea ranked highest in brand awareness, with a top-of-mind value of 48%; Frestea Green achieved a brand recall of 45.8%; and 75% of respondents recognized the Joytea brand, while the unaware-brand analysis revealed that the remaining respondents did not know the Joytea brand at all. The brand-association analysis found that NU Green Tea has 12 brand associations, Frestea Green has 10, and Joytea Green has 7, each set forming the respective brand image. The Cochran test results on these associations indicate that consumers are aware of the product attributes, and that NU Green Tea holds a stronger brand image than Frestea Green and Joytea Green. The consumer-perception analysis showed that NU Green Tea performs well against respondents' expectations, that Frestea Green must make more effort to satisfy consumers, and that Joytea Green needs to raise its awareness to improve consumer satisfaction. Finally, the consumer-loyalty analysis showed that the Frestea brand enjoys stronger consumer loyalty than the NU Green Tea and Joytea Green brands.

Keywords: brand equity, ready-to-drink green tea
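The Cochran Q analysis mentioned above tests whether respondents agree on a set of binary brand-association attributes. A minimal sketch of that statistic, assuming hypothetical respondent-by-attribute data (1 = respondent links the attribute to the brand), might look like this:

```python
# Hypothetical binary responses: rows = respondents, columns = candidate
# brand-association attributes (1 = attribute is linked to the brand).
def cochrans_q(data):
    """Cochran's Q statistic for k related binary samples."""
    k = len(data[0])                                   # number of attributes
    col = [sum(row[j] for row in data) for j in range(k)]
    rows = [sum(row) for row in data]
    n = sum(rows)                                      # grand total of 1s
    num = (k - 1) * (k * sum(c * c for c in col) - n * n)
    den = k * n - sum(r * r for r in rows)
    return num / den

responses = [
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
]
q = cochrans_q(responses)
# Compare q against the chi-square critical value with k - 1 degrees of
# freedom; attributes are dropped iteratively until Q is non-significant,
# and the surviving attributes form the brand's association set.
print(q)  # → 2.0
```

This is only an illustration of the test statistic, not the study's actual data or code.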


Author(s):  
Sergei Starov ◽  
Igor Gladkikh ◽  
Daniil Muravskii
Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4496
Author(s):  
Vlad Pandelea ◽  
Edoardo Ragusa ◽  
Tommaso Apicella ◽  
Paolo Gastaldo ◽  
Erik Cambria

Emotion recognition, among other natural language processing tasks, has greatly benefited from the use of large transformer models. Deploying these models on resource-constrained devices, however, is a major challenge due to their computational cost. In this paper, we show that the combination of large transformers, used as high-quality feature extractors, with simple hardware-friendly classifiers based on linear separators can achieve competitive performance while allowing real-time inference and fast training. Various solutions, including batch and online sequential learning, are analyzed. Additionally, our experiments show that latency and performance can be further improved via dimensionality reduction and pre-training, respectively. The resulting system is implemented on two types of edge devices, namely an edge accelerator and two smartphones.
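The combination described above, a frozen transformer as feature extractor feeding a linear separator trained online, can be sketched as follows. This is an illustrative toy, not the authors' code: the "embeddings" are tiny hypothetical vectors standing in for transformer outputs, and the classifier is a plain online perceptron.

```python
# Sketch: a frozen transformer produces embeddings; a simple linear
# separator is trained online (one sample at a time) on top of them.
class OnlinePerceptron:
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def update(self, x, y, lr=0.1):
        err = y - self.predict(x)          # 0 when already correct
        if err:
            self.w = [wi + lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += lr * err

# Hypothetical 2-D "embeddings" for a binary emotion toy task.
stream = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]
clf = OnlinePerceptron(dim=2)
for _ in range(10):                        # a few passes over the stream
    for x, y in stream:
        clf.update(x, y)
print([clf.predict(x) for x, _ in stream])  # → [1, 1, 0, 0]
```

Because only the linear head is updated, training reduces to a handful of multiply-accumulate operations per sample, which is what makes this style of classifier attractive on edge hardware.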


Author(s):  
Adhri Nandini Paul ◽  
Peizhi Yan ◽  
Yimin Yang ◽  
Hui Zhang ◽  
Shan Du ◽  
...  

2021 ◽  
Vol 295 ◽  
pp. 117159
Author(s):  
Domenic Cipollone ◽  
Hui Yang ◽  
Feng Yang ◽  
Joeseph Bright ◽  
Botong Liu ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 222
Author(s):  
Baigan Zhao ◽  
Yingping Huang ◽  
Hongjian Wei ◽  
Xing Hu

Visual odometry (VO) refers to the incremental estimation of the motion state of an agent (e.g., a vehicle or robot) from image information, and is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimating camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that the motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in a self-encoding manner. A recurrent neural network then processes the OF changes, i.e., conducts sequential learning. The extracted sequential OF subspace is used to regress the 6-dimensional pose vector. We derive three models with different network structures and training schemes: LS-CNN-VO, LS-AE-VO, and LS-RCNN-VO. In particular, we train the encoder separately in an unsupervised manner; this avoids non-convergence when training the whole network and yields a more generalized and effective feature representation. Extensive experiments were conducted on the KITTI and Malaga datasets, and the results demonstrate that our LS-RCNN-VO outperforms existing learning-based VO approaches.
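The pipeline described above (optical-flow frame → latent code → recurrent state over the sequence → 6-D pose regression) can be sketched structurally as below. All dimensions and weights here are hypothetical placeholders, not the paper's trained network; the point is only the data flow between the three stages.

```python
import math

# Structural sketch of the VO pipeline: encoder -> RNN -> pose head.
def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def encode(flow, w_enc):
    return matvec(w_enc, flow)             # latent OF representation

def rnn_step(h, x, w_h, w_x):
    pre = [a + b for a, b in zip(matvec(w_h, h), matvec(w_x, x))]
    return [math.tanh(p) for p in pre]     # updated recurrent state

def pose_head(h, w_out):
    return matvec(w_out, h)                # 6-DoF pose estimate

# Hypothetical sizes: flattened OF field, latent code, hidden state.
flow_dim, latent, hidden = 8, 4, 3
w_enc = [[0.1] * flow_dim for _ in range(latent)]
w_x = [[0.2] * latent for _ in range(hidden)]
w_h = [[0.1] * hidden for _ in range(hidden)]
w_out = [[0.05] * hidden for _ in range(6)]

h = [0.0] * hidden
for flow in ([0.5] * flow_dim, [0.6] * flow_dim):  # two consecutive OF fields
    h = rnn_step(h, encode(flow, w_enc), w_h, w_x)
pose = pose_head(h, w_out)                 # 3 translation + 3 rotation terms
print(len(pose))  # → 6
```

Training the encoder separately (as an autoencoder on OF fields) before attaching the recurrent and regression stages is what the paper reports as the key to stable convergence.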

