Performance Comparison between Two Established Microgrid Planning MILP Methodologies Tested On 13 Microgrid Projects

Energies ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 4460 ◽  
Author(s):  
Michael Stadler ◽  
Zack Pecenak ◽  
Patrick Mathiesen ◽  
Kelsey Fahy ◽  
Jan Kleissl

Mixed Integer Linear Programming (MILP) optimization algorithms provide accurate and clear solutions for Microgrid and Distributed Energy Resources projects. Full-scale optimization approaches optimize all time-steps of data sets (e.g., 8760 time-steps or higher resolutions), incurring extreme and unpredictable run-times, which often prohibits such approaches for effective Microgrid designs. Down-sampling approaches exist to reduce run-times. Given that the literature evaluates the full-scale and down-sampling approaches only for limited numbers of case studies, a more comprehensive study involving multiple Microgrids is lacking. This paper closes this gap by comparing results and run-times of a full-scale 8760 h time-series MILP to a peak-preserving day-type MILP for 13 real Microgrid projects. The day-type approach reduces the computational time by between 85% and almost 100% (from 2 h of computational time to less than 1 min). At the same time, the day-type approach keeps the objective function (OF) differences below 1.5% for 77% of the Microgrids. The other cases show OF differences between 6% and 13%, which can be reduced to 1.5% or less by applying a two-stage hybrid approach that designs the Microgrid based on down-sampled data and then performs a full-scale dispatch algorithm. This two-stage approach results in 20–99% run-time savings.
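As an illustration of the down-sampling idea, the sketch below collapses an 8760-hour load profile into weighted day-types while keeping the day that contains each type's peak, so the annual peak survives the reduction. The grouping rule and function names are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: peak-preserving day-type down-sampling of an 8760-hour profile.

def downsample_day_types(hourly_load, classify_day):
    """Collapse 365 daily profiles into weighted day-types.

    hourly_load  : list of 8760 hourly values
    classify_day : maps a day index (0..364) to a day-type label
    Returns {label: (weight_in_days, representative_24h_profile)}, where
    the representative day is the member containing the type's peak hour.
    """
    days = [hourly_load[d * 24:(d + 1) * 24] for d in range(365)]
    groups = {}
    for d, profile in enumerate(days):
        groups.setdefault(classify_day(d), []).append(profile)
    result = {}
    for label, profiles in groups.items():
        rep = max(profiles, key=max)  # day holding the group's peak hour
        result[label] = (len(profiles), rep)
    return result

# toy example: flat 1.0 load with a single 5.0 peak at hour 4000
load = [1.0] * 8760
load[4000] = 5.0
types = downsample_day_types(load,
                             lambda d: "weekend" if d % 7 >= 5 else "weekday")
peak_preserved = max(max(profile) for _, profile in types.values())
print(peak_preserved)  # 5.0: the annual peak survives down-sampling
```

A MILP solved over the weighted representative days then sees far fewer time-steps while still sizing equipment against the true peak.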

2013 ◽  
Vol 10 (4) ◽  
pp. 1531-1538
Author(s):  
Mahmoud M. Ismail ◽  
Ibrahim M. El-henawy

In this paper, a hybridization of two swarm intelligence approaches, stochastic diffusion search (SDS) and particle swarm optimization (PSO), is presented for solving integer programming (IP) problems. The hybrid implementation avoids certain drawbacks and weaknesses of each algorithm, allowing an optimal solution to be found in acceptable computational time. It reaches the optimal solution in considerably less time than solving the model directly on the entire dataset, outperforms the results obtained by each technique separately, and is highly competitive with the state-of-the-art in large-scale optimization. Furthermore, our results suggest that combining PSO with SDS for solving IP problems is a promising research direction in combinatorial optimization.
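For orientation, the sketch below shows the PSO half of such a scheme adapted to integer programming by evaluating particles at their rounded positions; the SDS component of the paper's hybrid is omitted, and all parameter values are common defaults assumed for illustration.

```python
# Sketch: PSO for integer programming via rounded-position evaluation.
import random

def pso_integer(objective, bounds, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]

    def f(p):  # evaluate the objective at the rounded (integer) point
        return objective([round(x) for x in p])

    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return [round(x) for x in gbest], f(gbest)

# minimize (x-3)^2 + (y+2)^2 over integers in [-10, 10]^2
best, val = pso_integer(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2,
                        ([-10, -10], [10, 10]))
print(best, val)  # converges to the optimum [3, -2] with value 0
```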


Author(s):  
Ebrahim Asadi-Gangraj ◽  
Sina Nayeri

Due to the increasing population, the growing number of vehicles, and environmental pollution, planning vehicle routes efficiently is one of the important problems of our time. This article proposes a Multi-Objective Mixed Integer Programming (MOMIP) model for the vehicle-routing problem with time windows, driver-specific times, and vehicle-specific capacities (VRPTDV), a variant of the classical VRPTW that uses driver-specific travel and service times and vehicle-specific capacities to model the familiarity of the different drivers with the customers to visit. The first objective function minimizes the traveled distance and the second minimizes the working duration. Since the problem is NP-hard, optimal solutions for instances of realistic size cannot be obtained within a reasonable amount of computational time using exact solution approaches. Hence, a hybrid approach based on the LP-metric method and a genetic algorithm is proposed to solve the given problem.
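The LP-metric idea is to scalarize the two objectives by measuring a solution's weighted deviation from the ideal point, in which each objective is optimized alone. A minimal sketch follows; the weights and example figures are illustrative assumptions.

```python
# Sketch: LP-metric scalarization of multiple objectives.

def lp_metric(f_values, f_ideal, weights, p=1):
    """Distance-to-ideal measure for a candidate solution.

    f_values : objective values of the candidate
    f_ideal  : best value of each objective when optimized alone
    weights  : relative importance of each objective (sum to 1)
    p        : norm parameter (p=1 gives a weighted sum of deviations)
    """
    total = 0.0
    for f, f_star, w in zip(f_values, f_ideal, weights):
        total += (w * abs(f - f_star) / abs(f_star)) ** p
    return total ** (1.0 / p)

# candidate route: 120 km travelled, 9 h worked; ideals: 100 km, 8 h
score = lp_metric([120.0, 9.0], [100.0, 8.0], [0.5, 0.5])
print(score)
```

A genetic algorithm can then minimize this single score instead of juggling two objectives directly.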


Author(s):  
Sathiyapriya Krishnamoorthy ◽  
Sudha Sadasivam G. ◽  
Rajalakshmi M.

The explosion of data analysis techniques has encouraged organizations to publish microdata about individuals. While the released data sets provide valuable information to researchers, it is possible to infer sensitive information from the published non-sensitive data using association rule mining. An association rule is characterized as sensitive if its confidence is above a disclosure threshold. These sensitive rules should be made uninteresting before the dataset is released publicly. This is done by modifying the data that support the sensitive rules, so that the confidence of these rules is reduced below the disclosure threshold. The main goal of the proposed system is to hide a set of sensitive association rules by perturbing the quantitative data that contain sensitive knowledge using PSO and a hybrid PSO-GA, with minimum side effects such as lost rules and ghost rules. The performance of the PSO and hybrid PSO-GA approaches in effectively hiding fuzzy association rules is also compared. Experimental results demonstrate that the hybrid approach is efficient in terms of lost rules, number of modifications, and hiding failure.
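The underlying sanitization mechanic can be sketched directly: lower the confidence of a sensitive rule X → Y below the disclosure threshold by perturbing transactions that support it. The greedy modification order below is a stand-in; in the paper, PSO / hybrid PSO-GA searches for the modifications that minimize side effects.

```python
# Sketch: hiding a sensitive rule X -> Y by removing its consequent
# from supporting transactions until confidence drops below threshold.

def confidence(transactions, X, Y):
    sup_x = sum(1 for t in transactions if X <= t)
    sup_xy = sum(1 for t in transactions if (X | Y) <= t)
    return sup_xy / sup_x if sup_x else 0.0

def hide_rule(transactions, X, Y, threshold):
    """Greedily drop Y items until conf(X -> Y) < threshold."""
    txs = [set(t) for t in transactions]  # work on a copy
    for t in txs:
        if confidence(txs, X, Y) < threshold:
            break
        if (X | Y) <= t:
            t -= Y  # one modification: remove the consequent items
    return txs

data = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"a"}]
print(confidence(data, {"a"}, {"b"}))         # 0.75 before hiding
sanitized = hide_rule(data, {"a"}, {"b"}, 0.5)
print(confidence(sanitized, {"a"}, {"b"}))    # below 0.5 afterwards
```

Each removal is one "modification"; rules that accidentally fall below their support threshold in the process are the "lost rules" the paper seeks to minimize.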


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Fatemeh Daneshamooz ◽  
Parviz Fattahi ◽  
Seyed Mohammad Hassan Hosseini

Purpose
Two-stage production systems comprising a processing shop and an assembly stage are widely used in various manufacturing industries. These two stages are usually studied independently, which may not lead to ideal results. This paper aims to deal with a two-stage production system consisting of a job shop followed by an assembly stage.

Design/methodology/approach
Some exact methods are proposed based on the branch and bound (B&B) approach to minimize the total completion time of products. As B&B approaches are usually time-consuming, three efficient lower bounds are developed for the problem, and variable neighborhood search is used to provide a proper upper bound of the solution in each branch. In addition, to create branches and search new nodes, two strategies are applied: best-first search and depth-first search (DFS). Another feature of the proposed algorithms is that the search space is reduced by releasing the precedence constraint. In this case, the problem becomes equivalent to a parallel machine scheduling problem, and the redundant branches that do not satisfy the precedence constraint are removed. Therefore, the number of nodes and the computational time are significantly reduced without eliminating the optimal solution.

Findings
Some numerical examples are used to evaluate the performance of the proposed methods. Comparison with a mathematical model (mixed-integer linear programming) validates the accuracy and efficiency of the proposed methods. In addition, computational results indicate the superiority of the DFS strategy with regard to CPU time.

Originality/value
Studies of scheduling problems for two-stage production systems consisting of a job shop followed by an assembly stage have traditionally presented approximate methods and metaheuristic algorithms. This is the first study that introduces exact methods based on the B&B approach.
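The branching-and-pruning mechanics, and the DFS versus best-first choice, can be illustrated on the parallel machine relaxation the abstract mentions. The toy B&B below minimizes makespan on two machines; the lower bound and instance are simple illustrative choices, not the paper's three bounds.

```python
# Sketch: B&B for two-machine makespan, switchable search strategy.
import heapq

def bb_makespan(jobs, strategy="dfs"):
    jobs = sorted(jobs, reverse=True)
    rem = [0] * (len(jobs) + 1)       # rem[i] = sum of jobs[i:]
    for i in range(len(jobs) - 1, -1, -1):
        rem[i] = rem[i + 1] + jobs[i]
    best = sum(jobs)                  # trivial upper bound
    frontier = [(rem[0] / 2, 0, 0, 0)]  # (lower bound, index, load1, load2)
    while frontier:
        if strategy == "best":
            node = heapq.heappop(frontier)   # best-first: lowest bound
        else:
            node = frontier.pop()            # DFS: frontier as a stack
        lb, i, l1, l2 = node
        if lb >= best:
            continue                         # prune dominated branch
        if i == len(jobs):
            best = min(best, max(l1, l2))    # leaf: record makespan
            continue
        for a, b in ((l1 + jobs[i], l2), (l1, l2 + jobs[i])):
            child_lb = max(a, b, (a + b + rem[i + 1]) / 2)
            if child_lb < best:
                child = (child_lb, i + 1, a, b)
                if strategy == "best":
                    heapq.heappush(frontier, child)
                else:
                    frontier.append(child)
    return best

print(bb_makespan([4, 3, 3, 2, 2, 2], "dfs"),
      bb_makespan([4, 3, 3, 2, 2, 2], "best"))  # 8 8
```

Both strategies reach the same optimum; they differ only in node order, and hence in memory footprint and CPU time.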


1994 ◽  
Vol 29 (1-2) ◽  
pp. 53-61
Author(s):  
Ben Chie Yen

Urban drainage models utilize hydraulics of different levels. Developing or selecting a model appropriate to a particular project is not an easy task. Not knowing the hydraulic principles and numerical techniques used in an existing model, users often misuse and abuse the model. Hydraulically, the use of the Saint-Venant equations is not always necessary. In many cases the kinematic wave equation is inadequate because of the backwater effect, whereas in designing sewers Manning's formula is often adequate. The flow travel time provides a guide in selecting the computational time step Δt, which in turn, together with flow unsteadiness, helps in the selection of steady or unsteady flow routing. Often the noninertia model is the appropriate model for unsteady flow routing, whereas delivery curves are very useful for stepwise steady nonuniform flow routing and for determination of channel capacity.
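Manning's formula, the level of hydraulics the abstract notes is often adequate for sewer design, is simple to state; the sketch below also derives a travel time of the kind used to guide the choice of Δt. Pipe dimensions and roughness are illustrative values (SI units).

```python
# Sketch: Manning's formula for steady uniform flow.
import math

def manning_velocity(n, R, S):
    """Mean velocity V = (1/n) * R^(2/3) * S^(1/2)  [m/s]

    n : Manning roughness coefficient
    R : hydraulic radius = flow area / wetted perimeter [m]
    S : slope of the energy line [m/m]
    """
    return (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(S)

# full circular concrete sewer, diameter 1.0 m: R = D/4 = 0.25 m
V = manning_velocity(n=0.013, R=0.25, S=0.002)
Q = V * math.pi * 0.5 ** 2     # discharge = velocity * area [m^3/s]
travel_time = 500.0 / V        # seconds to traverse a 500 m reach
print(round(V, 2), round(Q, 2))
```

The travel time over a reach, divided into a few steps, gives a reasonable first estimate of the computational time step Δt for routing.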


Author(s):  
Tannistha Pal

Images captured in severe atmospheric conditions, especially fog, are critically degraded in quality; the reduced visibility in turn affects several computer vision applications such as visual surveillance, intelligent vehicles, and remote sensing. Acquiring a clear view is thus the prime requirement of any imaging application. In the last few years, many approaches have been proposed to solve this problem. In this article, a comparative analysis is made of different existing image defogging algorithms, and a technique for image defogging based on the dark channel prior strategy is then proposed. Experimental results show that the proposed method performs efficiently, significantly improving the visual quality of images taken in foggy weather. The computational time of existing techniques is also much higher, which the proposed method overcomes. A qualitative assessment is performed on both benchmark and real-world data sets to determine the efficacy of the technique. Finally, the work is concluded with its relative advantages and shortcomings.
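The dark channel prior the method builds on is easy to sketch: for each pixel, take the minimum intensity over the color channels within a local patch. Haze-free regions give values near zero; fog raises them. The patch size and toy image below are illustrative, and this is only the prior itself, not the full defogging pipeline.

```python
# Sketch: dark channel of an RGB image (pure-Python, for illustration).

def dark_channel(image, patch=1):
    """image: H x W list of (r, g, b) tuples with values in [0, 1]."""
    h, w = len(image), len(image[0])
    # per-pixel minimum over the colour channels
    min_rgb = [[min(image[y][x]) for x in range(w)] for y in range(h)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - patch), min(h, y + patch + 1))
            out[y][x] = min(min(min_rgb[yy][xx]
                                for xx in range(max(0, x - patch),
                                                min(w, x + patch + 1)))
                            for yy in ys)
    return out

clear = [[(0.9, 0.1, 0.2)] * 4] * 4   # one near-zero channel: haze-free
foggy = [[(0.8, 0.8, 0.8)] * 4] * 4   # all channels bright: fog
print(dark_channel(clear)[2][2], dark_channel(foggy)[2][2])  # 0.1 0.8
```

Defogging methods use this map to estimate the transmission and atmospheric light before recovering the scene radiance.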


2021 ◽  
pp. 100093
Author(s):  
Ico Broekhuizen ◽  
Santiago Sandoval ◽  
Hanxue Gao ◽  
Felipe Mendez-Rios ◽  
Günther Leonhardt ◽  
...  

Top ◽  
2021 ◽  
Author(s):  
Denise D. Tönissen ◽  
Joachim J. Arts ◽  
Zuo-Jun Max Shen

This paper presents a column-and-constraint generation algorithm for two-stage stochastic programming problems. A distinctive feature of the algorithm is that it does not assume fixed recourse, and as a consequence the values and dimensions of the recourse matrix can be uncertain. The proposed algorithm contains multi-cut (partial) Benders decomposition and the deterministic equivalent model as special cases and can be used to trade off computational speed and memory requirements. The algorithm outperforms multi-cut (partial) Benders decomposition in computational time and the deterministic equivalent model in memory requirements for a maintenance location routing problem. In addition, for instances with a large number of scenarios, the algorithm outperforms the deterministic equivalent model in both computational time and memory requirements. Furthermore, we present an adaptive relative tolerance for instances in which the solution time of the master problem is the bottleneck and the slave problems can be solved relatively efficiently. The adaptive relative tolerance is large in early iterations and converges to zero in the final iteration(s) of the algorithm. The combination of this adaptive relative tolerance with the proposed algorithm decreases the computational time of our instances even further.
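The adaptive-tolerance idea can be sketched as a schedule for the relative gap passed to the master solve: loose early, tightening towards zero. The geometric decay rule and its parameters below are illustrative assumptions, not the paper's actual rule.

```python
# Sketch: an adaptive relative tolerance schedule for the master problem.

def adaptive_tolerance(iteration, start=0.10, decay=0.5, floor=1e-6):
    """Relative MIP gap to pass to the master solve at this iteration."""
    return max(start * decay ** iteration, floor)

schedule = [adaptive_tolerance(k) for k in range(6)]
print(schedule)  # large early, converging towards zero
```

Solving the master loosely in early iterations saves time exactly when the master is the bottleneck, while the shrinking tolerance preserves convergence to the exact optimum.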


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2181
Author(s):  
Rafik Nafkha ◽  
Tomasz Ząbkowski ◽  
Krzysztof Gajowniczek

The electricity tariffs available to customers in Poland depend on the connection voltage level and contracted capacity, which reflect the customer demand profile. Therefore, before connecting to the power grid, each consumer declares the demand for maximum power. This amount, referred to as the contracted capacity, is used by the electricity provider to assign the proper connection type to the power grid, including the size of the security breaker. Maximum power is also the basis for calculating fixed charges for electricity consumption, which is controlled and metered through peak meters. If the peak demand exceeds the contracted capacity, a penalty charge is applied to the exceeded amount, which is up to ten times the basic rate. In this article, we present several solutions for entrepreneurs based on the implementation of two-stage and deep learning approaches to predict maximal load values and the moments of exceeding the contracted capacity in the short term, i.e., up to one month ahead. The forecast is further used to optimize the capacity volume to be contracted in the following month to minimize network charge for exceeding the contracted level. As confirmed experimentally with two datasets, the application of a multiple output forecast artificial neural network model and a genetic algorithm (two-stage approach) for load optimization delivers significant benefits to customers. As an alternative, the same benefit is delivered with a deep learning architecture (hybrid approach) to predict the maximal capacity demands and, simultaneously, to determine the optimal capacity contract.
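The cost structure being optimized can be sketched directly: a fixed fee proportional to the contracted capacity plus a penalty, up to ten times the basic rate, on any peak demand above it. The rates and peak values below are illustrative assumptions; the paper forecasts the peaks with neural networks and then optimizes the contracted level against this kind of charge.

```python
# Sketch: network charge with a penalty for exceeding contracted capacity.

def monthly_charge(contracted_kw, peak_demands_kw,
                   rate=10.0, penalty_factor=10.0):
    """Fixed fee on contracted capacity plus penalty on exceedances."""
    fixed = rate * contracted_kw
    excess = sum(max(0.0, p - contracted_kw) for p in peak_demands_kw)
    return fixed + penalty_factor * rate * excess

peaks = [95.0, 102.0, 110.0]  # metered peak demand values for one month
# brute-force the best contracted capacity over a candidate grid
best = min(range(80, 131), key=lambda c: monthly_charge(c, peaks))
print(best, monthly_charge(best, peaks))
```

With a tenfold penalty factor it is cheapest to contract for the forecast maximum, which is why accurate peak forecasts translate directly into lower network charges.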


Atmosphere ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 64
Author(s):  
Feng Jiang ◽  
Yaqian Qiao ◽  
Xuchu Jiang ◽  
Tianhai Tian

The randomness, nonstationarity and irregularity of air pollutant data make forecasting difficult. To improve forecast accuracy, we propose a novel hybrid approach based on two-stage decomposition with embedded sample entropy, the group teaching optimization algorithm (GTOA), and the extreme learning machine (ELM) to forecast the concentration of particulate matter (PM10 and PM2.5). First, improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) is employed to decompose the concentration data of PM10 and PM2.5 into a set of intrinsic mode functions (IMFs) with different frequencies. In addition, the wavelet transform (WT) is utilized to further decompose the high-frequency IMFs, selected on the basis of their sample entropy values. Then the GTOA is used to optimize the ELM, and the GTOA-ELM is utilized to predict all the subseries. The final forecast is obtained by ensembling the forecasts of all subseries. To further demonstrate the predictive performance of the hybrid approach on air pollutants, the hourly concentration data of PM2.5 and PM10 are used to make one-step-, two-step- and three-step-ahead predictions. The empirical results demonstrate that the hybrid ICEEMDAN-WT-GTOA-ELM approach has superior forecasting performance and stability over other methods. This novel method also provides an effective and efficient approach to making predictions for nonlinear, nonstationary and irregular data.
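The sample entropy used to decide which IMFs receive the further wavelet decomposition measures how often patterns of length m that match within a tolerance still match at length m+1; lower values mean a more regular series. A minimal sketch follows, with the common default parameters assumed rather than taken from the paper.

```python
# Sketch: sample entropy of a short series (lower = more regular).
import math

def sample_entropy(series, m=2, r=0.2):
    tol = r * (max(series) - min(series))  # simple tolerance scale

    def count(mm):
        n = len(series) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if all(abs(series[i + k] - series[j + k]) <= tol
                       for k in range(mm)):
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return math.log(b / a) if a > 0 and b > 0 else float("inf")

regular = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
irregular = [0, 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(sample_entropy(regular) <= sample_entropy(irregular))  # True
```

In the hybrid pipeline, IMFs whose sample entropy is high (irregular, high-frequency content) are sent through the extra wavelet decomposition before forecasting.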

