International Real Estate Review

2019 ◽  
Vol 22 (4) ◽  
pp. 535-570
Author(s):  
Chongyu Wang ◽  
Jeffrey P. Cohen ◽  
John L. Glascock ◽  
...  

The question of whether REITs compete for scarce capital across geographic space is deserving of attention. In this study, we consider the issue of spatial competition among REITs across U.S. states in terms of the degree of interdependence in financial capital demand. First, we motivate the issue with a theoretical model of cost minimization by using a representative REIT in a given U.S. state and demonstrate that a priori, it is unclear whether the capital demand of a REIT depends on that of the REITs in other states. Then we use spatial econometrics techniques and find empirically that REITs compete for financial capital with REITs in other states. We also find evidence of feedback (or indirect) effects, thus implying amplified crowding out of financial capital when other REITs in nearby states increase financial capital demand. Our findings are aligned with the predation hypothesis, which suggests that REIT managers might exploit the financial distress of neighboring REITs and/or investors as an opportunity to steal their market share. Another key contribution of this study is that we focus on capital liquidity as opposed to stock liquidity.
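
The feedback (indirect) effects described above are a standard output of spatial autoregressive (SAR) models. As a minimal sketch, assuming a toy row-standardized weight matrix `W` and invented coefficients (the abstract does not report its estimates), the direct, indirect, and total effects can be computed from the reduced form:

```python
import numpy as np

# Hypothetical illustration of direct and indirect (feedback) effects in a
# SAR model y = rho*W*y + X*beta + eps; rho, beta, and W are made-up values,
# not estimates from the study.
n = 4
# Row-standardized contiguity matrix for four states on a line (assumed).
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
rho, beta = 0.4, 2.0          # assumed spatial dependence and slope

# Effects matrix: partial derivatives dy_i/dx_j = [(I - rho*W)^-1]_ij * beta
S = np.linalg.inv(np.eye(n) - rho * W) * beta
direct = np.mean(np.diag(S))          # own-state effect
total = np.mean(S.sum(axis=1))        # average total effect
indirect = total - direct             # spillover / feedback effect
print(f"direct={direct:.3f} indirect={indirect:.3f} total={total:.3f}")
```

With a row-standardized `W`, the average total effect collapses to beta/(1 - rho), so the spillover share grows with the spatial dependence parameter.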

Author(s):  
Michelle Blom ◽  
Slava Shekh ◽  
Don Gossink ◽  
Tim Miller ◽  
Adrian R Pearce

Future defense logistics will be heavily reliant on autonomous vehicles for the transportation of supplies. We consider a dynamic logistics problem in which: multiple supply item types are transported between suppliers and consuming (sink) locations; and autonomous vehicles (road-, sea-, and air-based) make decisions on where to collect and deliver supplies in a decentralized manner. Sink nodes consume dynamically varying demands (whose timing and size are not known a priori). Network arcs, and vehicles, experience failures at times, and for durations, that are not known a priori. These dynamic events are caused by an adversary, seeking to disrupt the network. We design domain-dependent planning algorithms for these vehicles whose primary objective is to minimize the likelihood of stockout events (where insufficient resource is present at a sink to meet demand). Cost minimization is a secondary objective. The performance of these algorithms, across varying scenarios, with and without restrictions on communication between vehicles and network locations, is evaluated using agent-based simulation. We show that stockpiling-based strategies, where quantities of resource are amassed at strategic locations, are most effective on large land-based networks with multiple supply item types, with simpler “shuttling”-based approaches being sufficient otherwise.
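
The stockpiling-versus-shuttling comparison can be caricatured with a single sink node. This is a deliberately simplified sketch with invented demand, lead-time, and buffer numbers, not the paper's agent-based scenarios:

```python
import random

# Toy comparison of "shuttling" (small order-up-to target) vs "stockpiling"
# (large buffer amassed at the sink); all numbers are invented.
def simulate(buffer_target, periods=1000, lag=2, seed=1):
    random.seed(seed)
    stock, stockouts = buffer_target, 0
    pipeline = [0] * lag                    # orders in transit
    for _ in range(periods):
        stock += pipeline.pop(0)            # deliveries arrive after `lag`
        demand = random.randint(0, 6)       # demand unknown a priori
        if demand > stock:
            stockouts += 1
            stock = 0
        else:
            stock -= demand
        # Order up to the target, accounting for what is already in transit.
        pipeline.append(max(0, buffer_target - stock - sum(pipeline)))
    return stockouts

shuttle = simulate(buffer_target=6)     # hold roughly one period of demand
stockpile = simulate(buffer_target=20)  # amass a strategic stockpile
print("shuttle stockouts:", shuttle, " stockpile stockouts:", stockpile)
```

Because the stockpile target exceeds worst-case demand over the delivery lag, the large buffer eliminates stockouts here, while the lean shuttling policy suffers them whenever demand spikes arrive faster than replenishment.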


2012 ◽  
Vol 11 ◽  
pp. 85 ◽  
Author(s):  
Bruno César Araújo ◽  
Donald Pianto ◽  
Fernanda De Negri ◽  
Luiz Ricardo Cavalcante ◽  
Patrick Alves

The sectoral funds were established in the late 1990s to create more stable conditions for the public financing of science, technology, and innovation (ST&I) activities in Brazil. As with other instruments that encourage firm-level innovation, the expectation is that access to the sectoral funds would increase firms' technological efforts and improve their results. The aim of this paper is therefore to evaluate the impact of these funds on the technological efforts and results of industrial firms in Brazil over the period 2001 to 2006. The theoretical basis for the discussion is the international literature that has repeatedly analyzed the crowding-in or crowding-out effects of policies supporting firm-level innovation. Those studies seek to verify whether the adopted policies complement the resources firms allocate to innovation activities or whether public resources simply substitute for them. In this paper, a quasi-experimental technique is applied to compare firms that accessed the sectoral funds with those that did not, using panel data that include information on technological efforts and results. The control group is defined using a Propensity Score Matching (PSM) algorithm in order to eliminate the selection bias in access to the funds, which a priori leads firms that access these resources to follow a different trajectory from those that do not. Estimates of the percentage differences in the growth rates of technological efforts indicate a significant divergence between the treatment and control groups and allow the crowding-out hypothesis to be rejected. The differential in the growth rate of PoTec, the proxy for technological efforts, is estimated at 6.8 p.p. in the first year, 11.5 p.p. in the second, 15.7 p.p. in the third, and 26.7 p.p. in the fourth year after access. The sectoral funds also have a positive and significant impact on total employment, although only a marginally significant impact on high-technology exports was observed after four years among treatment-group firms. In addition, a preliminary analysis of the impacts of the different instruments that make up the sectoral funds attributes most of the impact to the provision of credit on more favorable terms.
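
The control-group construction rests on propensity score matching. Below is a minimal sketch of nearest-neighbour matching on propensity scores, with synthetic firms standing in for the study's data; a true treatment effect of +5 is planted so the bias correction is visible:

```python
import numpy as np

# Synthetic PSM illustration: selection into treatment depends on covariate x,
# which also drives the outcome, so the naive comparison is biased.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                            # a firm covariate
treated = rng.random(n) < 1 / (1 + np.exp(-x))    # selection depends on x
# Outcome growth: true treatment effect of +5 plus covariate influence.
y = 2.0 * x + 5.0 * treated + rng.normal(size=n)
# Propensity scores (here the true model; in practice a fitted logit).
ps = 1 / (1 + np.exp(-x))

t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
# Match each treated firm to the control with the closest propensity score.
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
att = (y[t_idx] - y[matches]).mean()     # average treatment effect on treated
naive = y[treated].mean() - y[~treated].mean()
print(f"naive={naive:.2f}  matched ATT={att:.2f}")
```

The naive difference in means overstates the effect because treated firms already differ before treatment; matching on the propensity score recovers an estimate close to the planted +5.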


Author(s):  
Dan Griffith

Spatial autocorrelation (SA)—the correlation among georeferenced observations arising from their relative locations in geographic space—has a history dating to the mid-1900s, although conceptual awareness of it dates back to the early 1900s. SA is everywhere. It manifests itself in one- and two-dimensional synchronizations, exemplified by the experiment involving multiple metronomes sitting on a board that rests on two soda cans (illustrating an indirect, common-factor SA source), or the aggregate flashing of fireflies created by their emission into the air of chemicals that stimulate nearby fireflies to light (illustrating a direct spatial-interaction SA source). The latter outcome also can arise from mimicking behavior, as occurs with bandit bumble bees in a single meadow: when they rob a yellow rattle's flowers of nectar, their entry holes in a given field tend to be unambiguously on only the left- or right-hand side of the flowers. The degree of organization in the geographic patterns that emerge signifies the level of positive SA. Such SA led Heckscher to discover a new species of firefly in 2013. This type of SA is the basis of Nobel laureate Schelling's models of segregation, and of The Economist (20 April 2013, p. 16) observing that, regardless of class, geographic clusters of voters in Britain act as if "political opinions derive from the air people breathe." Moderate positive SA characterizes slider puzzles, magnetic sculpture toys, and television pictures. Meanwhile, negative SA relates to spatial patterns of competition.
Although this type of SA is rarely encountered in practice, it is illustrated by the Grand Prairie Independent School District's (GPISD) attempts to increase the amount of money it receives from the state of Texas by holding annual events to attract students from surrounding school districts to attend its schools (Dallas Morning News, 9 January 2014); GPISD attempts to increase its enrollments by decreasing enrollments in its neighboring school districts. A timeline for the evolution of the SA concept helps establish its historically relevant literature. In the early 1800s, Laplace recognized autocorrelation—albeit serial, for time series—by acknowledging that between-day variations in barometric pressure readings tend to be much greater than within-day variations. From 1914 to 1935, correlation in spatial series observations was recognized by Student, then Yule, then both Stephan and Neprash, and then Fisher. This recognition set the stage for establishing the concept of SA, which Moran and Geary did in the early 1950s. In parallel, writing in French, Matheron and Krige did so within the context of geostatistics. Next, more formal models of SA were formulated, first by Whittle, then by Mead, and finally by Cliff and Ord, whose numerous publications popularized the concept in the 1970s. One outcome of Cliff and Ord's work was the coining of the phrase spatial econometrics in 1979 by Paelinck and Klaassen. Finally, as the century drew to a close, Griffith established the foundation of eigenvector spatial filtering, which extends SA analysis to the entire family of non-normal random variables.
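
Positive and negative SA of the kinds discussed above are conventionally measured with Moran's I. Here is a small sketch using a hand-built contiguity matrix on a toy one-dimensional series whose alternating values mimic the competition (negative SA) pattern:

```python
import numpy as np

# Moran's I for a series y with spatial weights W:
# I = (n / sum(W)) * (z' W z) / (z' z), where z are deviations from the mean.
def morans_i(y, W):
    z = y - y.mean()
    return len(y) / W.sum() * (z @ W @ z) / (z @ z)

y = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # alternating: dissimilar neighbours
n = len(y)
# Binary contiguity: neighbours are adjacent positions on a line.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(morans_i(y, W))   # strongly negative SA
```

For this perfectly alternating pattern the statistic reaches -1, while a clustered pattern such as [1, 1, 1, -1, -1, -1] on the same weights gives a positive value.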


Author(s):  
Kevin E. Henrickson ◽  
Wesley W. Wilson

A model of transportation demand and the interrelated supply decisions of agricultural shippers is derived over a geographic space. These shippers use prices to procure grain and to make output, mode, and market decisions. Each decision is affected by the characteristics of the region and the level of spatial competition between the shipper and its rivals. All of these factors are integrated into the model of derived demand and spatial competition. The model is applied to data representing barge elevators on the upper Mississippi and Illinois Rivers to estimate transportation demands and gathering areas. The results provide demand elasticity estimates for annual volumes between −1.4 and −1.9, sizably larger in magnitude than previous estimates for similar traffic. The results also indicate that inbound transportation rates to the barge shipper, as well as distance to the nearest competitor, have a significant influence on annual volumes. A second model, explaining the size of the market areas of elevators, is also estimated. The rates of alternative modes that compete for barge traffic, as well as distance to the nearest competitor, influence market areas. The results provide a strong argument that transportation demands are elastic and that spatial market areas vary substantially with transportation rates.
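
Elasticities such as the −1.4 to −1.9 range reported above are typically read off as the slope of a log-log demand regression. A sketch on synthetic data (the true elasticity is set to −1.6; rates and volumes are invented, not the barge-elevator data):

```python
import numpy as np

# Synthetic demand data with a known constant elasticity of -1.6.
rng = np.random.default_rng(42)
rate = rng.uniform(5.0, 15.0, size=100)                  # shipping rate, assumed units
volume = 1e6 * rate ** -1.6 * rng.lognormal(0, 0.1, size=100)

# Elasticity = slope of log(volume) on log(rate) in an OLS fit.
X = np.column_stack([np.ones_like(rate), np.log(rate)])
beta, *_ = np.linalg.lstsq(X, np.log(volume), rcond=None)
elasticity = beta[1]
print(f"estimated elasticity ≈ {elasticity:.2f}")   # close to the true -1.6
```

An estimate below −1 in this specification is exactly what "demand is elastic" means: a 1% rate increase reduces volume by more than 1%.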


IQTISHODUNA ◽  
2016 ◽  
Vol 11 (2) ◽  
pp. 90-101
Author(s):  
Mohamad Bastomi ◽  
Meldona Meldona

Investors generally take the view that financial distress and the proportions of the financial capital structure affect fluctuations in stock prices. The aim of this research is therefore to determine the effect of financial distress on stock prices, both directly and indirectly through the financial capital structure. This research uses a descriptive quantitative approach, with service-sector companies listed on the Indonesia Stock Exchange (BEI) in 2009–2013 as the object. The sampling method is purposive sampling, which yields 23 companies as the sample. Financial distress is predicted using Altman's z-score method, while the hypotheses are tested with path analysis in SPSS, taking the classical assumption tests into consideration. The results show that financial distress has a negative impact on stock prices: financial distress caused by capital losses reduces working capital, which in turn depresses the stock price. The proportion of the financial structure, by contrast, does not create the business risk that would give rise to financial distress, and thus does not move stock prices. The Altman z-score prediction shows that, of the 23 companies, 10 (APOL, BIPP, BLTA, BTEL, FMII, HITS, IATA, LIMAS, RIMO, and TKGA) fall into the financial distress category, 5 (BHIT, BKDP, BMSR, OKAS, and ZBRA) fall into the gray area, and the remaining 8 (ASIA, BNBR, CENT, ELTY, LCGP, META, TRIL, and TRUB) are in the safe condition.
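
The Altman z-score used for the distress classification combines five financial ratios with fixed weights, with cutoffs at 1.81 (distress) and 2.99 (safe). A sketch of the original 1968 formulation, with illustrative ratios rather than data from the sampled firms:

```python
# Altman's original (1968) Z-score for listed manufacturing firms:
# Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """X1..X5: working capital, retained earnings, EBIT (all /total assets),
    market value of equity / total liabilities, sales / total assets."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def classify(z):
    if z < 1.81:
        return "financial distress"
    if z < 2.99:
        return "gray area"
    return "safe"

# Illustrative ratios (not from the study's companies).
z = altman_z(wc_ta=0.10, re_ta=0.15, ebit_ta=0.12, mve_tl=1.5, sales_ta=1.1)
print(z, classify(z))
```

With these example ratios the score lands between the two cutoffs, i.e. in the gray area, mirroring the three-way classification used for the 23 sampled companies.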


2006 ◽  
Vol 3 (2) ◽  
pp. 31-34
Author(s):  
Dino Falaschetti ◽  
Michael J. Orlando

Economists tend to agree that the recent cut in US dividend taxes will encourage investment and reduce financial distress. In addition to creating these “benefits,” however, the tax cut can also increase governance costs. For example, by removing a bias for leveraged capital structures, the tax cut foregoes debt’s superiority on at least three dimensions: evaluating and monitoring demanders of financial capital; constraining managerial agents from opportunistically employing capital market proceeds; and encouraging non-financial stakeholders (e.g., employees, suppliers) to make firm-specific investments.


Author(s):  
Christian Mandl ◽  
Stefan Minner

Problem definition: We study a practice-motivated multiperiod stochastic commodity procurement problem under price uncertainty with forward and spot purchase options. Existing approaches are based on parametric price models, which inevitably involve price model misspecification and generalization error. Academic/practical relevance: We propose a nonparametric, data-driven approach (DDA) that is consistent with the optimal procurement policy structure but does not require the a priori specification and estimation of stochastic price processes. In addition to historical prices, DDA is able to leverage real-time feature data, such as economic indicators, in solving the problem. Methodology: This paper provides a framework for prescriptive analytics in dynamic commodity procurement, with optimal purchase policies learned directly from data as functions of features, via mixed integer linear programming (MILP) under cost minimization objectives. Hence, DDA focuses on optimal decisions rather than optimal predictions. Furthermore, we combine optimization with regularization from machine learning (ML) to extract decision-relevant data from noise. Results: Based on numerical experiments and empirical data, we show that feature data have significant value for commodity procurement when procurement policy parameters are learned as functions of features. However, overfitting deteriorates the performance of data-driven solutions, which calls for ML extensions to improve out-of-sample generalization. Compared with an internal best-practice benchmark, DDA generates savings of on average 9.1 million euros per annum (4.33%) over 10 years of backtesting. Managerial implications: A practical benefit of DDA is that it yields simple but optimally structured decision rules that are easy to interpret and operationalize. Furthermore, DDA is generalizable and applicable to many other procurement settings.
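
The idea of learning policy parameters directly from data, rather than first fitting a price model, can be caricatured with a one-threshold forward-versus-spot decision. This toy sketch uses simulated prices, a single invented feature, and grid search in place of the paper's MILP:

```python
import numpy as np

# Toy data-driven procurement: each period we either lock in the quoted forward
# price now or pay next period's spot price. The policy "buy forward when the
# forward price is below a learned, feature-dependent threshold" is trained by
# grid search on historical data. All numbers are simulated.
rng = np.random.default_rng(7)
T = 500
feat = rng.normal(size=T)                        # economic indicator feature
spot_next = 10 + 2 * feat + rng.normal(size=T)   # future spot depends on feature
forward = 10 + rng.normal(size=T)                # forward price quoted today

def cost(theta0, theta1):
    buy_fwd = forward <= theta0 + theta1 * feat  # per-period policy decision
    return np.where(buy_fwd, forward, spot_next).sum()

# In-sample grid search over the two policy parameters.
grid = np.linspace(-3, 3, 25)
best = min((cost(10 + a, b), a, b) for a in grid for b in grid)
print("spot-only cost:", spot_next.sum())
print("learned policy cost:", best[0], "params:", 10 + best[1], best[2])
```

The learned threshold leans on the feature because the feature predicts the future spot; training and evaluating on the same sample, as here, is exactly the overfitting risk the abstract flags, which regularization and out-of-sample testing are meant to control.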


2021 ◽  
Vol 5 (3) ◽  
pp. 127-138
Author(s):  
Valentin Gallart-Camahort ◽  
Luis Callarisa-Fiol ◽  
Javier Sanchez-Garcia

Strong brand equity is important for any business. Although the concept of brand equity has been studied in various fields, its analysis has not been as extensive in the retail sector. Customer engagement, meanwhile, is an increasingly relevant and researched topic, yet studies relating it to retail trade are not common. The present work aims to analyze the effect of engagement on the different components of retail brand equity. The a priori model builds on previous research and the proposed hypotheses. A Confirmatory Factor Analysis is performed on data obtained through a structured questionnaire with closed questions and a 5-point Likert-type response scale; the study sample consists of 623 respondents. The conceptual model includes the brand equity dimensions (awareness, perceived quality, image, perceived value, and loyalty), and the hypothesized causal model relates the variables that make up brand equity and the influence of engagement on them. The empirical results show that customer engagement positively affects all the components of retailer brand equity except image, mainly retailer awareness, loyalty, and perceived quality. The authors conclude that retailer awareness, loyalty towards the retailer, and retailer perceived quality are influenced by engagement. Consequently, retail managers should pay special attention to creating actions that build customer engagement in the different areas of interaction, both online and at the physical point of sale. Future studies should expand the geographic space, considering different regions or even countries and observing possible differences in respondent behavior.


Author(s):  
D. E. Luzzi ◽  
L. D. Marks ◽  
M. I. Buckett

As the HREM becomes increasingly used for the study of dynamic localized phenomena, the development of techniques to recover the desired information from a real image is important. Often, the important features are not strongly scattering in comparison to the matrix material, in addition to being masked by statistical and amorphous noise. The desired information will usually involve the accurate knowledge of the position and intensity of the contrast. In order to decipher the desired information from a complex image, cross-correlation (xcf) techniques can be utilized. Unlike other image processing methods which rely on data massaging (e.g. high/low pass filtering or Fourier filtering), the cross-correlation method is a rigorous data reduction technique with no a priori assumptions. We have examined basic cross-correlation procedures using images of discrete Gaussian peaks and have developed an iterative procedure to greatly enhance the capabilities of these techniques when the contrast from the peaks overlaps.
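
The cross-correlation approach can be sketched for the simplest case: locating one weak Gaussian peak in noise. This toy example uses an FFT-based (circular) cross-correlation; the image size, template width, and noise level are invented:

```python
import numpy as np

# Locate a Gaussian peak in a noisy image by cross-correlating with a template
# of the known peak shape; all parameters are illustrative.
rng = np.random.default_rng(3)
size, true_pos = 64, (40, 23)
yy, xx = np.mgrid[0:size, 0:size]
gauss = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

image = gauss(*true_pos) + 0.3 * rng.normal(size=(size, size))  # noisy data
template = gauss(size // 2, size // 2)                          # known peak shape

# Circular cross-correlation via the FFT: IFFT(FFT(image) * conj(FFT(template))).
xcf = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real
shift = np.unravel_index(xcf.argmax(), xcf.shape)
# Convert the best-match shift back to peak coordinates (template is centred).
found = ((shift[0] + size // 2) % size, (shift[1] + size // 2) % size)
print("true:", true_pos, "recovered:", found)
```

Because the correlation integrates the signal over the whole template, the peak location is recovered reliably even when the per-pixel noise is a sizeable fraction of the peak amplitude; this is the sense in which cross-correlation reduces data without imposing filtering assumptions.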


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high-resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low-angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 angstrom target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed on to the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.

