Verizon Uses Advanced Analytics to Rationalize Its Tail Spend Suppliers

2020 ◽  
Vol 50 (3) ◽  
pp. 197-211
Author(s):  
Hossein Abdollahnejadbarough ◽  
Kalyan S Mupparaju ◽  
Sagar Shah ◽  
Colin P Golding ◽  
Abelardo C Leites ◽  
...  

The Verizon Global Supply Chain organization currently manages thousands of active supplier contracts. These contracts account for several billion dollars of annualized Verizon spend. Managing thousands of suppliers, controlling spend, and achieving the best price per unit (PPU) through negotiations are costly and labor-intensive tasks handled by Verizon strategic sourcing teams. Verizon engages thousands of suppliers for many reasons—best price, diversity, short-term requirements, and so forth. Whereas a few larger-spend suppliers can be managed manually by dedicated sourcing managers, managing the thousands of smaller suppliers in the tail spend is challenging, can often introduce risk, and can be expensive. At Verizon, a unique blend of descriptive, predictive, and prescriptive analytics, as well as Verizon-specific sourcing acumen, was leveraged to tackle this problem and rationalize Verizon’s tail spend suppliers. Through the creative application of operations research, machine learning, text mining, natural language processing, and artificial intelligence, Verizon reduced spend by millions of dollars and achieved the lowest PPU for the sourced products and services. Other benefits Verizon realized were centralized and transparent contract and supplier relationship management, overhead cost reduction, decreased contract execution lead time, and service quality improvement for Verizon’s strategic sourcing teams.

2021 ◽  
pp. 1-10
Author(s):  
Hye-Jeong Song ◽  
Tak-Sung Heo ◽  
Jong-Dae Kim ◽  
Chan-Young Park ◽  
Yu-Seop Kim

Sentence similarity evaluation is a significant task used in machine translation, classification, and information extraction in the field of natural language processing. When two sentences are given, an accurate judgment should be made as to whether their meanings are equivalent even if the words and contexts of the sentences differ. To this end, existing studies have measured the similarity of sentences by focusing on the analysis of words, morphemes, and letters. To measure sentence similarity, this study uses Sent2Vec sentence embeddings as well as morpheme-level word embeddings. Vectors representing words are input to a one-dimensional convolutional neural network (1D-CNN) with kernels of various sizes and to a bidirectional long short-term memory (Bi-LSTM) network. Self-attention is applied to the features transformed through the Bi-LSTM. Subsequently, the vectors produced by the 1D-CNN and by self-attention are reduced through global max pooling and global average pooling, respectively, to extract representative values. The vectors generated through the above process are concatenated with the vector generated by Sent2Vec and represented as a single vector. This vector is input to a softmax layer, which finally determines the similarity between the two sentences. The proposed model improves accuracy by up to 5.42 percentage points compared with conventional sentence similarity estimation models.
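The pooling-and-concatenation step described in the abstract can be illustrated in plain Python. This is a toy sketch under assumed shapes, not the authors' implementation; the feature vectors and the stand-in Sent2Vec embedding are invented:

```python
# Illustrative sketch (not the authors' code): global max pooling and global
# average pooling reduce a variable-length sequence of feature vectors to
# fixed-size summaries, which are then concatenated with a sentence embedding.

def global_max_pool(features):
    # features: list of equal-length feature vectors, one per time step
    return [max(col) for col in zip(*features)]

def global_avg_pool(features):
    return [sum(col) / len(features) for col in zip(*features)]

# Toy stand-in for Bi-LSTM/1D-CNN outputs: 3 time steps, 2 features each
features = [[0.1, 0.9], [0.4, 0.2], [0.3, 0.5]]
sent2vec = [0.7, 0.1]  # stand-in for a Sent2Vec sentence embedding

pooled = global_max_pool(features) + global_avg_pool(features)
combined = pooled + sent2vec  # the single vector fed to the softmax layer
```

Whatever the sequence length, the pooled representation has a fixed size, which is what allows concatenation with the fixed-size Sent2Vec vector.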


2017 ◽  
Vol 6 (3) ◽  
pp. 39
Author(s):  
Paul A. Griffin ◽  
Mohammedi Padaria

The purpose of this paper is to examine how firms’ information landscape has changed in recent years and why this could be problematic for those engaged in financial analysis and equity valuation. Our central contention is that two main forces of change – lower information costs and faster information processing – have completely disrupted the traditional concept of financial analysis. In response to this disruption, financial analysis will now increasingly take the form of “reactive valuation.” In addition to examining our main contention, we introduce a new term into the literature, called “reactive valuation,” which we define as the ultra short-term valuation of an equity, lasting from a few seconds to a few hours, based on information primarily published through social media channels. It may be later corroborated by factually based information or remain unsubstantiated. It may or may not be from an authoritative source. It also may not relate clearly or directly to the valuation of the underlying asset. However, based mostly on the tools of artificial intelligence and natural language processing, “reactive valuation” will invariably provide an opportunity for statistical arbitrage during the short time it takes for the market to digest the information. Financial analysts who survive these two forces of change will have detailed knowledge of this new form of financial analysis.


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Kazi Nabiul Alam ◽  
Md Shakib Khan ◽  
Abdur Rab Dhruba ◽  
Mohammad Monirujjaman Khan ◽  
Jehad F. Al-Amri ◽  
...  

The COVID-19 pandemic has had a devastating effect on many people, creating severe anxiety, fear, and complicated feelings or emotions. After the initiation of vaccinations against coronavirus, people’s feelings have become more diverse and complex. Our aim is to understand and unravel their sentiments in this research using deep learning techniques. Social media is currently the best way to express feelings and emotions, and with the help of Twitter, one can have a better idea of what is trending and going on in people’s minds. Our motivation for this research was to understand the diverse sentiments of people regarding the vaccination process. In this research, the timeline of the collected tweets was from December 21 to July 21. The tweets contained information about the most common vaccines recently available across the world. The sentiments of people regarding vaccines of all sorts were assessed using the natural language processing (NLP) tool Valence Aware Dictionary for sEntiment Reasoner (VADER). Grouping the polarities of the obtained sentiments into three classes (positive, negative, and neutral) helped us visualize the overall scenario; our findings included 33.96% positive, 17.55% negative, and 48.49% neutral responses. In addition, we included an analysis of the timeline of the tweets in this research, as sentiments fluctuated over time. A recurrent neural network- (RNN-) oriented architecture, including long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM), was used to assess the performance of the predictive models, with LSTM achieving an accuracy of 90.59% and Bi-LSTM achieving 90.83%. Other performance metrics such as precision, F1-score, and a confusion matrix were also used to validate our models and findings more effectively. This study improves understanding of the public’s opinion on COVID-19 vaccines and supports the aim of eradicating coronavirus from the world.
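The three-way polarity grouping the study applies to VADER output can be sketched as follows. The ±0.05 cutoffs are VADER's conventional thresholds on the compound score; the scores below are made-up examples, not the paper's data:

```python
# Minimal sketch of bucketing VADER compound scores into the three polarity
# groups (positive, negative, neutral) used in the study.

def polarity_group(compound, threshold=0.05):
    # VADER's conventional cutoffs: >= +0.05 positive, <= -0.05 negative
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"

# Invented example compound scores for a handful of tweets
scores = [0.62, -0.41, 0.01, 0.05, -0.05]
groups = [polarity_group(s) for s in scores]

# Share of each group, analogous to the paper's 33.96% / 17.55% / 48.49% split
share = {g: groups.count(g) / len(groups)
         for g in ("positive", "negative", "neutral")}
```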


2021 ◽  
Author(s):  
Carlos Velasco-Forero ◽  
Jayaram Pudashine ◽  
Mark Curtis ◽  
Alan Seed

Short-term precipitation forecasting plays a vital role in minimizing the adverse effects of heavy precipitation events such as flash flooding. Radar rainfall nowcasting techniques based on statistical extrapolation are used to overcome current limitations of precipitation forecasts from numerical weather models, as they provide high spatial- and temporal-resolution forecasts within minutes of the observation time. Among various algorithms, the Short-Term Ensemble Prediction System (STEPS) provides probabilistic rainfall field nowcasts, accounting for the uncertainty in the precipitation forecasts by means of ensembles with spatial and temporal characteristics very similar to those of the observed radar rainfall fields. The Australian Bureau of Meteorology uses STEPS to generate ensembles of rainfall forecasts in real time from its extensive weather radar network.

In this study, the results of a large probabilistic verification exercise applied to a new version of STEPS (hereafter STEPS-3) are reported. An extensive dataset of more than 47,000 individual 5-minute radar rainfall fields (the equivalent of more than 163 days of rain) from ten weather radars across Australia (covering tropical to mid-latitude regions) was used to generate and verify 96-member rainfall ensemble nowcasts with lead times of up to 90 minutes. STEPS-3 was found to be more than 15 times faster in delivering results than the previous version of STEPS and an open-source algorithm called pySTEPS. Interestingly, significant variations were observed in the quality of predictions and verification results from one radar to another and from one event to another, depending on the characteristics and location of the radar, the nature of the rainfall event, the accumulation threshold, and the lead time. For example, the CRPS and RMSE of ensembles of 5-min rainfall forecasts for radars located in mid-latitude regions are better (lower) than those from radars located in tropical areas for all lead times. Also, rainfall fields from S-band radars seem to produce rainfall forecasts able to successfully identify extreme rainfall events at lead times up to 10 minutes longer than those produced using C-band radar datasets for the same rain-rate thresholds. Some details of the new STEPS-3 version, case studies, and examples of the verification results will be presented.
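The CRPS used in the verification can be computed for an ensemble with the standard empirical estimator CRPS = E|X − y| − ½ E|X − X′|. A minimal sketch (not code from STEPS-3; the 4-member ensemble and observation are invented):

```python
# Empirical CRPS (continuous ranked probability score) for an ensemble
# forecast against a single observation. Lower is better; for a one-member
# ensemble it reduces to the absolute error.

def ensemble_crps(members, obs):
    n = len(members)
    # Mean absolute error of members against the observation
    term1 = sum(abs(x - obs) for x in members) / n
    # Half the mean absolute difference between all member pairs
    term2 = sum(abs(x - y) for x in members for y in members) / (2 * n * n)
    return term1 - term2

# Toy 4-member ensemble of 5-min rainfall (mm) versus an observed value
forecast = [1.0, 2.0, 2.5, 3.0]
observed = 2.2
score = ensemble_crps(forecast, observed)
```

The second term rewards ensemble spread, so a sharp but well-centered ensemble scores better than either an over-confident or an over-dispersed one.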


Author(s):  
Prashant Jindal ◽  
Anjana Solanki

This paper investigates the coordination issue in a decentralized supply chain consisting of a vendor and a buyer for a defective product. The authors develop two inventory models with controllable lead time under a service level constraint. The first is proposed under the decentralized mode based on the Stackelberg model; the other is proposed under the centralized mode of the integrated supply chain. Ordering cost reduction is also included as a decision variable, along with shipping quantity, lead time, and number of shipments. Computational findings obtained using Matlab 7.0 are provided to find the optimal solution. The results of numerical examples show that the centralized mode outperforms the decentralized mode, and that the proposed cost allocation model is effective in inducing both the vendor and the buyer to coordinate. The authors also numerically investigate the effects of the backorder parameter on the optimal solutions. The benefit of ordering cost reduction in both models is also demonstrated.
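The core reason centralization outperforms the decentralized mode can be shown with a much simpler toy model than the authors' formulation: when the buyer alone chooses the order quantity to minimize its own cost, the vendor's costs are ignored, so the joint optimum is missed. All cost functions and parameter values below are invented for illustration:

```python
# Toy EOQ-style comparison (not the authors' model): decentralized choice of
# order quantity Q by the buyer alone versus the centralized joint optimum.
from math import sqrt

D = 1000.0             # annual demand
A_b, h_b = 50.0, 2.0   # buyer ordering cost per order, buyer holding cost
A_v, h_v = 400.0, 1.0  # vendor setup cost per order, vendor holding cost

def buyer_cost(Q):  return D / Q * A_b + h_b * Q / 2
def vendor_cost(Q): return D / Q * A_v + h_v * Q / 2
def total_cost(Q):  return buyer_cost(Q) + vendor_cost(Q)

# Decentralized: buyer minimizes only its own cost (classic EOQ formula)
Q_dec = sqrt(2 * D * A_b / h_b)
# Centralized: Q minimizes the joint vendor + buyer cost
Q_cen = sqrt(2 * D * (A_b + A_v) / (h_b + h_v))

savings = total_cost(Q_dec) - total_cost(Q_cen)  # always >= 0
```

The gap `savings` is what a cost allocation scheme must share out so that both parties prefer the coordinated solution.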


2018 ◽  
Vol 10 (11) ◽  
pp. 113 ◽  
Author(s):  
Yue Li ◽  
Xutao Wang ◽  
Pengjian Xu

Text classification is of importance in natural language processing, as massive text information containing huge amounts of value needs to be classified into different categories for further use. To better classify text, this paper builds a deep learning model that achieves better classification results on Chinese text than other researchers’ models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as the deep learning methods for classifying Chinese text. LSTM is a special kind of recurrent neural network (RNN) that is capable of processing serialized information through its recurrent structure. By contrast, CNNs have shown their ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into our new model: the BLSTM-C model (BLSTM stands for bi-directional long short-term memory, while C stands for CNN). The BLSTM was responsible for obtaining a sequence output based on past and future contexts, which was then input to the convolutional layer for feature extraction. In our experiments, the proposed BLSTM-C model was evaluated in several ways. In the results, the model exhibited remarkable performance in text classification, especially on Chinese texts.


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1290 ◽  
Author(s):  
Rahman ◽  
Siddiqui

Abstractive text summarization, which generates a summary by paraphrasing a long text, remains a significant open problem in natural language processing. In this paper, we present an abstractive text summarization model, the multi-layered attentional peephole convolutional LSTM (long short-term memory), or MAPCoL, that automatically generates a summary from a long text. We optimize the parameters of MAPCoL using central composite design (CCD) in combination with response surface methodology (RSM), which yields the highest accuracy in terms of summary generation. We record the accuracy of our model (MAPCoL) on the CNN/DailyMail dataset. We perform a comparative analysis of the accuracy of MAPCoL against state-of-the-art models in different experimental settings. MAPCoL also outperforms traditional LSTM-based models with respect to the semantic coherence of the output summary.
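The idea behind RSM-based tuning can be shown in a deliberately simplified one-dimensional form: fit a low-order polynomial to a few observed (hyperparameter, accuracy) design points and take its stationary point as the candidate optimum. This is only an illustration of the principle, not MAPCoL's actual CCD/RSM procedure, and the design points below are invented:

```python
# 1-D response-surface sketch: fit the exact quadratic through three design
# points and return the x-coordinate of its extremum (vertex).

def quadratic_vertex(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    # Coefficients of a*x^2 + b*x + c through the three points
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    return -b / (2 * a)

# Invented design points: hyperparameter value -> validation accuracy
best = quadratic_vertex((0.001, 0.80), (0.01, 0.90), (0.1, 0.70))
```

A real CCD extends this to several factors at once, with center and axial points allowing the full second-order surface to be estimated.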


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 681 ◽  
Author(s):  
Praveen Edward James ◽  
Hou Kit Mun ◽  
Chockalingam Aravind Vaithilingam

The purpose of this work is to develop a spoken language processing system for smart device troubleshooting using human-machine interaction. This system combines a software Bidirectional Long Short-Term Memory (BLSTM)-based speech recognizer and a hardware LSTM-based language processor for Natural Language Processing (NLP), connected via a serial RS232 interface. Mel Frequency Cepstral Coefficient (MFCC)-based feature vectors from the speech signal are input directly into the BLSTM network. A dropout layer is added to the BLSTM layer to reduce over-fitting and improve robustness. The speech recognition component, a combination of an acoustic modeler, a pronunciation dictionary, and a BLSTM network for generating query text, executes in real time with an 81.5% Word Error Rate (WER) and an average training time of 45 s. The language processor comprises a vectorizer, lookup dictionary, key encoder, Long Short-Term Memory (LSTM)-based training and prediction network, and dialogue manager, and transforms query intent to generate response text with a processing time of 0.59 s, 5% hardware utilization, and an F1 score of 95.2%. The proposed system shows a 4.17% decrease in accuracy compared with existing systems, which use parallel processing and high-speed cache memories to perform additional training that improves accuracy. However, the language processor achieves a 36.7% decrease in processing time and a 50% decrease in hardware utilization, making it suitable for troubleshooting smart devices.
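The vectorizer-plus-lookup-dictionary stage of such an intent pipeline can be sketched in miniature. The intents, descriptions, and queries below are invented for illustration, and the paper's system maps queries with a trained LSTM rather than this word-overlap lookup:

```python
# Toy intent lookup: a bag-of-words "vectorizer" and a nearest-intent match
# by Jaccard word overlap against a small dictionary of intent descriptions.

INTENTS = {
    "reset_wifi": "reset the wifi router connection",
    "battery_drain": "phone battery drains too fast",
}

def bag_of_words(text):
    return set(text.lower().split())

def match_intent(query):
    q = bag_of_words(query)
    def score(desc):
        d = bag_of_words(desc)
        return len(q & d) / len(q | d)  # Jaccard similarity
    return max(INTENTS, key=lambda k: score(INTENTS[k]))

intent = match_intent("my wifi connection keeps dropping")
```

The matched intent key would then drive the dialogue manager's choice of response text.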

