An estimation of cost-based market liquidity from daily high, low and close prices

2020 ◽  
Vol 6 (2) ◽  
pp. 1-11
Author(s):  
J. Saleemi

This paper contributes to the asset-pricing literature by introducing a new method to estimate cost-based market liquidity (CBML), that is, the bid-ask spread. The proposed spread proxy correlates positively with established low-frequency spread proxies over a large dataset. The approach has useful implications in several respects. Unlike the Roll bid-ask spread model and the CHL bid-ask estimator, the CBML model consistently estimates market liquidity and trading cost over the entire dataset. Additionally, the CBML estimator always yields positive spreads, unlike the CS bid-ask spread model. The proposed approach is not computationally intensive to construct and can be applied in studies at both the market and firm levels.
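The CBML formula itself is not reproduced in the abstract, but the Roll estimator it is benchmarked against is standard and illustrates why consistency matters: Roll's spread is undefined whenever the serial covariance of price changes is non-negative. A minimal Python sketch (NumPy assumed; the price series is illustrative):

```python
import numpy as np

def roll_spread(close: np.ndarray) -> float:
    """Classic Roll (1984) bid-ask spread estimator from close prices.

    spread = 2 * sqrt(-cov(dp_t, dp_{t-1})), defined only when the
    serial covariance of price changes is negative; otherwise the
    estimator fails, the limitation the abstract alludes to.
    """
    dp = np.diff(close)
    cov = np.cov(dp[1:], dp[:-1])[0, 1]  # serial covariance of price changes
    if cov >= 0:
        return np.nan  # Roll estimator undefined for non-negative covariance
    return 2.0 * np.sqrt(-cov)

# Illustrative daily closes exhibiting bid-ask bounce
closes = np.array([100.0, 100.4, 99.9, 100.3, 99.8, 100.2])
print(roll_spread(closes))
```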

2016 ◽  
Vol 10 (10) ◽  
pp. 133
Author(s):  
Mohammad Ali Nasiri Khalili ◽  
Mostafa Kafaei Razavi ◽  
Morteza Kafaee Razavi

Item supply planning for a logistics system is one of the major issues in operations research. The aim of this article is to determine how much of each item must be provided per month from each supplier to meet the logistics system's requirements. To this end, a novel multi-objective mixed-integer programming model is offered for the first time. Since on-time delivery is critical in a logistics system, the first objective is the minimization of delivery-timing costs (including shortage and holding costs) together with the system's purchasing cost. The second objective is the minimization of supplier transportation costs. Solving the model shows how Multiple Objective Decision Making (MODM) can yield a supply policy and the transportation of the items the logistics system needs. The model is solved with CPLEX, and computational results show the effectiveness of the proposed model.
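The abstract does not reproduce the model, but a weighted-sum scalarization of the two cost objectives is a common way to solve such multi-objective MIPs. A minimal sketch using PuLP (the paper used CPLEX; all data, weights, and variable names here are hypothetical):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Hypothetical data: 2 suppliers, 2 items, one planning month
suppliers, items = range(2), range(2)
purchase_cost = {(0, 0): 4.0, (0, 1): 6.0, (1, 0): 5.0, (1, 1): 5.5}
transport_cost = {(0, 0): 1.0, (0, 1): 0.8, (1, 0): 0.6, (1, 1): 1.2}
demand = {0: 10, 1: 8}       # units of each item required
capacity = {0: 12, 1: 15}    # supply capacity per supplier

prob = LpProblem("supplier_allocation", LpMinimize)
x = {(s, i): LpVariable(f"x_{s}_{i}", lowBound=0, cat="Integer")
     for s in suppliers for i in items}

# Weighted-sum scalarization of the two objectives (weights assumed)
w1, w2 = 0.7, 0.3
prob += (w1 * lpSum(purchase_cost[s, i] * x[s, i] for s in suppliers for i in items)
         + w2 * lpSum(transport_cost[s, i] * x[s, i] for s in suppliers for i in items))

for i in items:      # each item's demand must be met
    prob += lpSum(x[s, i] for s in suppliers) >= demand[i]
for s in suppliers:  # respect each supplier's capacity
    prob += lpSum(x[s, i] for i in items) <= capacity[s]

prob.solve()
print({k: value(v) for k, v in x.items()})
```

Sweeping the weights (w1, w2) traces out different trade-offs between the two cost objectives, which is the essence of the MODM policy analysis the abstract describes.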


2007 ◽  
Vol 38 (7) ◽  
pp. 11-17
Author(s):  
Ronald M. Aarts

Conventionally, the ultimate goal in loudspeaker design has been a flat frequency response over a specified frequency range. This can be achieved by carefully selecting the main loudspeaker parameters, such as the enclosure volume, the cone diameter, the moving mass, and the crucial "force factor". For loudspeakers in small cabinets, this design procedure turns out to be quite inefficient, especially at low frequencies. This paper describes a new solution to the problem: the combination of highly non-linear preprocessing of the audio signal with a so-called low-force-factor loudspeaker. This combination yields a strongly increased efficiency, at least over a limited frequency range, at the cost of somewhat altered sound quality. An analytically tractable optimality criterion is defined and verified by the design of an experimental loudspeaker, which has much higher efficiency and sensitivity than current low-frequency loudspeakers while allowing a much smaller cabinet.
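The abstract does not give the paper's optimality criterion, but the textbook reference-efficiency formula for a direct-radiator loudspeaker shows why the force factor is so crucial: efficiency grows with the square of Bl and falls with the square of the moving mass. A sketch of that standard formula (parameter values are assumed, not from the paper):

```python
import math

def reference_efficiency(Bl, Sd, Re, Mms, rho0=1.18, c=345.0):
    """Textbook reference efficiency of a direct-radiator loudspeaker
    (not the paper's model): eta0 = rho0*(Bl)^2*Sd^2 / (2*pi*c*Re*Mms^2).

    Bl: force factor [T*m], Sd: cone area [m^2], Re: voice-coil
    resistance [ohm], Mms: moving mass [kg].
    """
    return rho0 * Bl**2 * Sd**2 / (2 * math.pi * c * Re * Mms**2)

# Illustrative values for a small woofer (assumed)
print(reference_efficiency(Bl=5.0, Sd=0.012, Re=6.0, Mms=0.010))  # ~0.3%
```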


BIOMATH ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 2106147
Author(s):  
Debkumar Pal ◽  
D Ghosh ◽  
P K Santra ◽  
G S Mahapatra

This paper presents the current situation of infectious coronavirus disease (COVID-19) in India and how to minimize its effects through a mathematical model. The model divides the population into six compartments: susceptible, exposed, home-quarantined, government-quarantined, infected individuals under treatment, and recovered. The basic reproduction number is calculated, and the stability of the proposed model at the disease-free and endemic equilibria is examined. Treatment control of the COVID-19 epidemic model is then presented for India's situation. An objective function is formulated that incorporates the number of infected individuals and the cost of the necessary treatment. Finally, an optimal control is derived that minimizes this objective function. Numerical simulations in MATLAB demonstrate the consistency of the model from a realistic standpoint.
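The abstract does not give the model equations, so the following is only a generic six-compartment sketch with hypothetical flows and rates (not the authors' fitted model), integrated with SciPy rather than the paper's MATLAB code:

```python
from scipy.integrate import solve_ivp

# Hypothetical rates (not the paper's fitted values)
beta, sigma = 0.3, 1 / 5.2      # transmission, incubation
q_h, q_g = 0.05, 0.03           # home / government quarantine rates
gamma, delta = 1 / 10, 1 / 14   # recovery of treated, release of quarantined

def covid_6c(t, y):
    """Six compartments: S, E, home-quarantined, govt-quarantined,
    infected-in-treatment, recovered."""
    S, E, Qh, Qg, I, R = y
    N = y.sum()
    dS = -beta * S * I / N - (q_h + q_g) * S + delta * (Qh + Qg)
    dE = beta * S * I / N - sigma * E
    dQh = q_h * S - delta * Qh
    dQg = q_g * S - delta * Qg
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dQh, dQg, dI, dR]

y0 = [1e6 - 10, 0, 0, 0, 10, 0]       # nearly all susceptible, 10 infected
sol = solve_ivp(covid_6c, (0, 180), y0)
print(sol.y[4].max())                  # peak of the treated-infected class
```

An optimal-control version would add a time-varying treatment rate u(t) to the I-equation and minimize an integral cost in I(t) and u(t)², as the abstract's objective function suggests.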


2018 ◽  
Vol 13 (3) ◽  
pp. 244
Author(s):  
Laura Broccardo ◽  
Luisa Tibiletti ◽  
Pertti Vilpas

This study investigates how balancing internal and external financing sources can create economic value. We set up a financial scorecard consisting of the Cost of Debt (COD), Return on Investment (ROI), and the Cost of Equity (COE). We show that COE should act as a cap for COD and a floor for ROI in order to increase both the Net Present Value at the Weighted Average Cost of Capital and the Adjusted Present Value of the levered investment. However, leverage should be carefully monitored when COD or ROI falls outside these bounds. Situations where leverage has the opposite effect on value creation and on the equity Internal Rate of Return are also discussed, and illustrative examples are given. The proposed model aims to help corporate management in financial decisions.
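The scorecard condition can be checked numerically with the standard textbook formulas for WACC and NPV. A minimal sketch (all figures illustrative; the ordering COD &lt; COE &lt; ROI mirrors the paper's cap-and-floor condition):

```python
def wacc(E, D, coe, cod, tax=0.0):
    """Standard weighted average cost of capital."""
    V = E + D
    return E / V * coe + D / V * cod * (1 - tax)

def npv(cash_flows, rate):
    """NPV of cash flows indexed from t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative capital structure and rates: COD = 4%, COE = 8% (ROI on the
# project, ~12%, exceeds COE, satisfying the floor condition)
E, D = 600.0, 400.0
r = wacc(E, D, coe=0.08, cod=0.04, tax=0.25)
print(round(r, 4))                           # discount rate, here 6%
print(round(npv([-1000, 300, 400, 500], r), 2))  # positive NPV at WACC
```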


Climate Law ◽  
2014 ◽  
Vol 4 (3-4) ◽  
pp. 301-326
Author(s):  
Ismo Pölönen

The article examines the key features and functions of the proposed Finnish Climate Change Act (FCCA). It also analyses the legal implications of the Act and the qualities and factors which may limit its effectiveness. The paper argues that, despite its weak legal implications, the FCCA would provide the regulatory preconditions for higher-quality climate policy-making in Finland and has the capacity to play an important role in national climate policy. The FCCA would deliver regulatory foundations for systematic and integrated climate policy-making, while also enabling wide public scrutiny. The proposed model leaves room for manifold climate-policy choices in varying societal and economic contexts. The cost of these dynamic features is relatively low predictability in terms of sectoral emission-reduction paths. Another relevant challenge relates to the intended preparation of overlapping mid-term energy and climate plans alongside the instruments of the FCCA.


Author(s):  
James Farrow

ABSTRACT
Objectives: The SA.NT DataLink Next Generation Linkage Management System (NGLMS) stores linked data in the form of a graph (in the computer-science sense) comprised of nodes (records) and edges (record relationships or similarities). This permits efficient pre-clustering techniques, based on transitive closure, to form groups of records which relate to the same individual (or satisfy other selection criteria).
Approach: Only information known (or at least highly likely) to be relevant is extracted from the graph as superclusters. This operation is computationally inexpensive when the underlying information is stored as a graph, and may be performed on-the-fly for typical clusters. More computationally intensive analysis and/or further clustering may then be performed on this smaller subgraph. Canopy clustering and the use of blocking to reduce pairwise comparisons are expressions of the same approach.
Results: Subclusters for manual review based on transitive closure are typically inexpensive enough to extract from the NGLMS that they are extracted on demand during clerical-review activities; there is no need to pre-calculate them. Once extracted, further analysis is undertaken on these smaller data groupings for visualisation and presentation for review and quality analysis. More computationally expensive techniques can be used at this point to prepare data for visualisation or to provide hints to manual reviewers.
Extracting high-recall groups of records for review, but presenting them to reviewers further grouped into high-precision groups by a second pass, has reduced the time taken by clerical reviewers at SA.NT DataLink to review a group by 30–40%. Reviewers are able to manipulate whole groups of related records at once rather than individual records.
Conclusion: Pre-clustering reduces the computational cost associated with higher-order clustering and analysis algorithms. Algorithms which scale as n² (or worse) are typical in comparison scenarios. By breaking the problem into pieces, the computational cost can be reduced, typically in proportion to the number of pieces the problem can be broken into. This reduction can make techniques feasible which would otherwise be computationally prohibitive.
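Transitive closure over similarity edges is exactly a connected-components computation, which a union-find structure performs in near-linear time. A minimal sketch (record IDs are hypothetical, not NGLMS data structures):

```python
def connected_components(edges):
    """Transitive-closure pre-clustering via union-find: records joined
    by any chain of similarity edges fall into one cluster."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return list(clusters.values())

# Hypothetical record-similarity edges
edges = [("r1", "r2"), ("r2", "r3"), ("r4", "r5")]
print(connected_components(edges))  # [{'r1','r2','r3'}, {'r4','r5'}]
```

If n records split into clusters of sizes k_i, within-cluster pairwise comparison then costs the sum of the k_i² rather than n², which is the cost reduction described above.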


1982 ◽  
Vol 72 (2) ◽  
pp. 643-661
Author(s):  
S. Shyam Sunder ◽  
Jerome J. Connor

Abstract A new procedure for routinely processing strong-motion earthquake signals using state-of-the-art filter design and implementation techniques is presented. The model, shown to be both accurate and efficient, is sufficiently flexible that the signal sampling period and filter parameters can be easily varied. A comparison of results from the existing United States model (Trifunac and Lee, 1973) and the proposed model shows significant differences in the ground-motion and response-spectrum characteristics for the same set of filter limits. Drifts in the integrated velocity and displacement traces and the theoretically incorrect asymptotic behavior of response-spectrum curves arising from the existing United States processing scheme have been eliminated. In addition to the importance of appropriately selecting a low-frequency limit for band-pass filtering the signals, this work demonstrates the sensitivity of the acceleration trace to the particular choice of high-frequency limit.
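The paper's actual filter design is not specified in the abstract; the sketch below only illustrates the general technique of zero-phase band-pass filtering an acceleration trace, where the low-frequency corner controls drift in the integrated motions (SciPy assumed; corner frequencies illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_accel(accel, fs, f_lo=0.1, f_hi=25.0, order=4):
    """Zero-phase Butterworth band-pass for a strong-motion record.

    filtfilt applies the filter forward and backward, avoiding the phase
    distortion (and resulting drift in integrated velocity/displacement)
    that a causal filter would introduce.
    """
    nyq = 0.5 * fs
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, accel)

# Synthetic example: 1 Hz signal plus a slow drift, sampled at 100 Hz
fs = 100.0
t = np.arange(0, 20, 1 / fs)
accel = np.sin(2 * np.pi * 1.0 * t) + 0.5 * t / t.max()  # drift term
filtered = bandpass_accel(accel, fs)  # drift suppressed by the 0.1 Hz corner
print(filtered[:5])
```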

