
Macroeconomic Forecasting in the Era of Big Data (eBook)

Theory and Practice

Peter Fuleky (Editor)

eBook Download: PDF
2019 | 1st ed. 2020
XIII, 719 pages
Springer International Publishing (Publisher)
978-3-030-31150-6 (ISBN)

EUR 255.73 incl. VAT
(CHF 249.85)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

This book surveys big data tools used in macroeconomic forecasting and addresses related econometric issues, including how to capture dynamic relationships among variables; how to select parsimonious models; how to deal with model uncertainty, instability, non-stationarity, and mixed-frequency data; and how to evaluate forecasts. Each chapter is self-contained with references and provides solid background information while reviewing the latest advances in the field. Accordingly, the book offers a valuable resource for researchers, professional forecasters, and students of quantitative economics.
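As a purely illustrative sketch of one class of techniques the book surveys (penalized regression for selecting parsimonious models, the subject of Chapter 7), the short Python example below fits a lasso to simulated predictors and produces a one-step-ahead forecast. The data, variable names, and penalty value are hypothetical assumptions for this sketch and are not taken from the book.

# Illustrative sketch only: lasso-penalized one-step-ahead forecast on
# simulated data; variable names and the penalty level are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate a target series driven by 3 of 50 candidate predictors.
T, K = 200, 50
X = rng.standard_normal((T, K))
beta = np.zeros(K)
beta[:3] = [0.8, -0.5, 0.3]
y = X @ beta + 0.5 * rng.standard_normal(T)

# Fit on the first T-1 observations, then forecast the final period.
model = Lasso(alpha=0.1).fit(X[:-1], y[:-1])
forecast = model.predict(X[-1:])[0]

print("selected predictors:", np.flatnonzero(model.coef_))
print(f"one-step-ahead forecast: {forecast:.3f}, actual: {y[-1]:.3f}")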




Peter Fuleky is an Associate Professor of Economics with a joint appointment at the University of Hawaii Economic Research Organization (UHERO) and the Department of Economics at the University of Hawaii at Manoa. His research focuses on econometrics, time series analysis, and forecasting. He is a co-author of UHERO's quarterly forecast reports on Hawaii's economy. He obtained his Ph.D. in Economics from the University of Washington, USA.

Foreword 6
Preface 7
Contents 9
List of Contributors 11
Part I Introduction 14
1 Sources and Types of Big Data for Macroeconomic Forecasting 15
1.1 Understanding What's Big About Big Data 15
1.1.1 How Big is Big? 16
1.1.2 The Challenges of Big Data 17
Undocumented and Changing Data Structures 17
Need for Network Infrastructure and Distributed Computing 18
Costs and Access Limitations 19
Data Snooping, Causation, and Big Data Hubris 19
1.2 Sources of Big Data for Forecasting 20
1.2.1 Financial Market Data 21
1.2.2 E-commerce and Scanner Data 22
1.2.3 Mobile Phones 23
1.2.4 Search Data 24
1.2.5 Social Network Data 25
1.2.6 Text and Media Data 26
1.2.7 Sensors, and the Internet of Things 27
1.2.8 Transportation Data 28
1.2.9 Other Administrative Data 28
1.2.10 Other Potential Data Sources 29
1.3 Conclusion 30
References 31
Part II Capturing Dynamic Relationships 36
2 Dynamic Factor Models 37
2.1 Introduction 37
2.2 From Exact to Approximate Factor Models 38
2.2.1 Exact Factor Models 39
2.2.2 Approximate Factor Models 41
2.3 Estimation in the Time Domain 42
2.3.1 Maximum Likelihood Estimation of Small Factor Models 42
2.3.2 Principal Component Analysis of Large Approximate Factor Models 45
2.3.3 Generalized Principal Component Analysis of Large Approximate Factor Models 47
2.3.4 Two-Step and Quasi-Maximum Likelihood Estimation of Large Approximate Factor Models 48
2.3.5 Estimation of Large Approximate Factor Models with Missing Data 49
2.4 Estimation in the Frequency Domain 51
2.5 Estimating the Number of Factors 56
2.6 Forecasting with Large Dynamic Factor Models 57
2.6.1 Targeting Predictors and Other Forecasting Refinements 58
2.7 Hierarchical Dynamic Factor Models 59
2.8 Structural Breaks in Dynamic Factor Models 61
2.8.1 Markov-Switching Dynamic Factor Models 61
2.8.2 Time Varying Loadings 64
2.9 Conclusion 69
References 69
3 Factor Augmented Vector Autoregressions, Panel VARs, and Global VARs 75
3.1 Introduction 75
3.2 Modeling Relations Across Units 78
3.2.1 Panel VAR Models 78
3.2.2 Restrictions for Large-Scale Panel Models 80
Cross-Sectional Homogeneity 80
Dynamic Interdependencies 81
Static Interdependencies 81
Implementing Parametric Restrictions 82
3.2.3 Global Vector Autoregressive Models 83
The Full GVAR Model 84
3.2.4 Factor Augmented Vector Autoregressive Models 86
3.2.5 Computing Forecasts 87
3.3 Empirical Application 88
3.3.1 Data and Model Specification 88
An Illustrative Example 89
Model Specification 90
3.3.2 Evaluating Forecasting Performance 91
Performance Measures 92
3.3.3 Results 92
Overall Forecasting Performance 93
Forecasts for Individual Countries 95
3.4 Summary 97
Appendix A: Details on Prior Specification 98
References 101
4 Large Bayesian Vector Autoregressions 104
4.1 Introduction 104
4.1.1 Vector Autoregressions 105
4.1.2 Likelihood Functions 106
4.2 Priors for Large Bayesian VARs 107
4.2.1 The Minnesota Prior 107
4.2.2 The Natural Conjugate Prior 110
4.2.3 The Independent Normal and Inverse-Wishart Prior 114
4.2.4 The Stochastic Search Variable Selection Prior 116
4.3 Large Bayesian VARs with Time-Varying Volatility, Heavy Tails, and Serially Dependent Errors 118
4.3.1 Common Stochastic Volatility 119
4.3.2 Non-Gaussian Errors 120
4.3.3 Serially Dependent Errors 120
4.3.4 Estimation 121
4.4 Empirical Application: Forecasting with Large Bayesian VARs 126
4.4.1 Data, Models, and Priors 127
4.4.2 Forecast Evaluation Metrics 128
4.4.3 Forecasting Results 129
4.5 Further Reading 130
Appendix A: Data 131
Appendix B: Sampling from the Matrix Normal Distribution 132
References 132
5 Volatility Forecasting in a Data Rich Environment 135
5.1 Introduction 135
5.2 Classical Tools for Volatility Forecasting: ARCH Models 137
5.2.1 Univariate GARCH Models 137
5.2.2 Multivariate GARCH Models 139
5.2.3 Dealing with Large Dimension in Multivariate Models 142
5.3 Stochastic Volatility Models 143
5.3.1 Univariate Stochastic Volatility Models 144
5.3.2 Multivariate Stochastic Volatility Models 145
5.3.3 Improvements on Classical Models 146
5.3.4 Dealing with Large Dimensional Models 149
5.4 Volatility Forecasting with High Frequency Data 152
5.4.1 Measuring Realized Variances 152
5.4.2 Realized Variance Modelling and Forecasting 153
5.4.3 Measuring and Modelling Realized Covariances 156
5.4.4 Realized (Co)variance Tools for Large Dimensional Settings 158
5.4.5 Bayesian Tools 160
5.5 Conclusion 162
References 162
6 Neural Networks 169
6.1 Introduction 169
6.1.1 Fully Connected Networks 171
6.1.2 Estimation 173
Gradient Estimation 174
6.1.3 Example: XOR Network 175
6.2 Design Considerations 178
6.2.1 Activation Functions 178
6.2.2 Model Shape 179
6.2.3 Weight Initialization 180
6.2.4 Regularization 181
6.2.5 Data Preprocessing 183
6.3 RNNs and LSTM 185
6.4 Encoder-Decoder 189
6.5 Empirical Application: Unemployment Forecasting 191
6.5.1 Data 191
6.5.2 Model Specification 192
6.5.3 Model Training 192
6.5.4 Results 193
6.6 Conclusion 194
References 194
Part III Seeking Parsimony 198
7 Penalized Time Series Regression 199
7.1 Introduction 199
7.2 Notation 200
7.3 Linear Models 200
7.3.1 Autoregressive Models 201
7.3.2 Autoregressive Distributed Lag Models 201
7.3.3 Vector Autoregressive Models 201
7.3.4 Further Models 201
7.4 Penalized Regression and Penalties 202
7.4.1 Ridge Regression 203
7.4.2 Least Absolute Shrinkage and Selection Operator (Lasso) 203
7.4.3 Adaptive Lasso 204
7.4.4 Elastic Net 205
7.4.5 Adaptive Elastic Net 206
7.4.6 Group Lasso 206
7.4.7 Other Penalties and Methods 207
7.5 Theoretical Properties 207
7.6 Practical Recommendations 211
7.6.1 Selection of the Penalty Parameters 211
Cross-Validation 212
Information Criteria 214
7.6.2 Computer Implementations 214
7.7 Simulations 215
7.8 Empirical Example: Inflation Forecasting 219
7.8.1 Overview 219
7.8.2 Data 224
7.8.3 Methodology 224
7.8.4 Results 226
7.9 Conclusions 232
References 233
8 Principal Component and Static Factor Analysis 235
8.1 Principal Component Analysis 235
8.1.1 Introduction 236
8.1.2 Variance Maximization 236
8.1.3 Reconstruction Error Minimization 238
8.1.4 Related Methods 240
Independent Component Analysis 240
Sparse Principal Component Analysis 242
8.2 Factor Analysis with Large Datasets 243
8.2.1 Factor Model Estimation by the Principal Component Method 244
Estimation 244
Estimate the Number of Factors 245
Rate of Convergence and Asymptotic Distribution 246
Factor-Augmented Regression 248
8.3 Regularization and Machine Learning in Factor Models 250
8.3.1 Machine Learning Methods 250
8.3.2 Model Selection Targeted at Prediction 252
Targeted Predictor 253
Partial Least Squares and Sparse Partial Least Squares 253
8.4 Policy Evaluation with Factor Models 255
8.4.1 Rubin's Model and ATT 255
8.4.2 Interactive Fixed-Effects Model 256
8.4.3 Synthetic Control Method 257
8.5 Empirical Application: Forecasting in Macroeconomics 258
8.5.1 Forecasting with Diffusion Index Method 258
Forecasting Procedures 259
Benchmark Models 259
Diffusion Index Models 260
Forecasting Performance 260
8.5.2 Forecasting Augmented with Machine Learning Methods 262
Diffusion Index Models 263
Hard Thresholding Models 263
Soft Thresholding Models 263
Empirical Findings 264
8.5.3 Forecasting with PLS and Sparse PLS 265
8.5.4 Forecasting with ICA and Sparse PCA 266
8.6 Empirical Application: Policy Evaluation with Interactive Effects 267
8.6.1 Findings Based on Monte Carlo Experiments 268
8.6.2 Empirical Findings 269
References 270
9 Subspace Methods 273
9.1 Introduction 273
9.2 Notation 275
9.3 Two Different Approaches to Macroeconomic Forecasting 275
9.3.1 Forecast Combinations 275
9.3.2 Principal Component Analysis, Diffusion Indices, Factor Models 276
9.4 Subspace Methods 277
9.4.1 Complete Subset Regression 278
Subspace Dimension 278
Weighting Schemes 279
Limitations 279
9.4.2 Random Subset Regression 280
9.4.3 Random Projection Regression 280
9.4.4 Compressed Regression 281
9.5 Empirical Applications of Subspace Methods 282
9.5.1 Macroeconomics 282
9.5.2 Microeconomics 283
9.5.3 Finance 284
9.5.4 Machine Learning 284
9.6 Theoretical Results: Forecast Accuracy 285
9.6.1 Mean Squared Forecast Error 286
Identity Covariance Matrix 287
9.6.2 Mean Squared Forecast Error Bounds 288
9.6.3 Theoretical Results in the Literature 289
9.7 Empirical Illustrations 290
9.7.1 Empirical Application: FRED-MD 290
Results 290
9.7.2 Empirical Application: Stock and Watson (2002) 291
Methods 292
Results 292
9.8 Discussion 294
References 296
10 Variable Selection and Feature Screening 298
10.1 Introduction 298
10.2 Marginal, Iterative, and Joint Feature Screening 300
10.2.1 Marginal Feature Screening 300
10.2.2 Iterative Feature Screening 301
10.2.3 Joint Feature Screening 301
10.2.4 Notations and Organization 302
10.3 Independent and Identically Distributed Data 303
10.3.1 Linear Model 303
10.3.2 Generalized Linear Model and Beyond 305
10.3.3 Nonparametric Regression Models 308
10.3.4 Model-Free Feature Screening 311
10.3.5 Feature Screening for Categorical Data 316
10.4 Time-Dependent Data 319
10.4.1 Longitudinal Data 319
10.4.2 Time Series Data 323
10.5 Survival Data 325
10.5.1 Cox Model 325
10.5.2 Feature Screening for Cox Model 326
References 329
Part IV Dealing with Model Uncertainty 332
11 Frequentist Averaging 333
11.1 Introduction 333
11.2 Background: Model Averaging 334
11.3 Forecast Combination 337
11.3.1 The Problem 339
11.3.2 Forecast Criteria 340
11.3.3 MSFE 342
The Forecast Combination Puzzle 344
Is the Simple Average Optimal? 345
11.3.4 MAD 346
11.4 Density Forecasts Combination 347
11.4.1 Optimal Weights 348
11.4.2 Theoretical Discussions 349
11.4.3 Extension: Method of Moments 352
11.5 Conclusion 356
Technical Proofs 357
References 359
12 Bayesian Model Averaging 362
12.1 Introduction 362
12.2 BMA in Economics 364
12.2.1 Jointness 365
12.2.2 Functional Uncertainty 366
12.3 Statistical Model and Methods 367
12.3.1 Model Specification 367
12.3.2 Regression Parameter Priors 368
12.3.3 Model Priors 369
Independent Model Priors 369
Dependent Model Priors 371
Dirichlet Process Model Priors 373
12.3.4 Inference 373
12.3.5 Post-Processing 375
12.4 Application 376
12.4.1 Data Description 376
12.4.2 Exploratory Data Analysis 377
12.4.3 BMA Results 379
12.4.4 Iterations Matter 385
12.4.5 Assessing the Forecasting Performance 387
12.5 Summary 388
References 389
13 Bootstrap Aggregating and Random Forest 392
13.1 Introduction 392
13.2 Bootstrap Aggregating and Its Variants 392
13.2.1 Bootstrap Aggregating (Bagging) 393
13.2.2 Sub-sampling Aggregating (Subagging) 394
13.2.3 Bootstrap Robust Aggregating (Bragging) 395
13.2.4 Out-of-Bag Error for Bagging 396
13.3 Decision Trees 397
13.3.1 The Structure of a Decision Tree 397
13.3.2 Growing a Decision Tree for Classification: ID3 and C4.5 401
13.3.3 Growing a Decision Tree for Classification: CART 410
13.3.4 Growing a Decision Tree for Regression: CART 412
13.3.5 Variable Importance in a Decision Tree 413
13.4 Random Forests 414
13.4.1 Constructing a Random Forest 414
13.4.2 Variable Importance in a Random Forest 416
13.4.3 Random Forest as Adaptive Kernel Functions 418
13.5 Recent Developments of Random Forest 420
13.5.1 Extremely Randomized Trees 421
13.5.2 Soft Decision Tree and Forest 422
13.6 Applications of Bagging and Random Forest in Economics 428
13.6.1 Bagging in Economics 428
13.6.2 Random Forest in Economics 430
13.7 Summary 430
References 431
14 Boosting 433
14.1 Introduction 433
14.2 AdaBoost 434
14.2.1 AdaBoost Algorithm 434
14.2.2 An Example 436
14.2.3 AdaBoost: Statistical View 438
14.3 Extensions to AdaBoost Algorithms 442
14.3.1 Real AdaBoost 442
14.3.2 LogitBoost 443
14.3.3 Gentle AdaBoost 444
14.4 L2Boosting 445
14.5 Gradient Boosting 446
14.5.1 Functional Gradient Descent 447
14.5.2 Gradient Boosting Algorithm 447
14.5.3 Gradient Boosting Decision Tree 448
14.5.4 Regularization 450
Early Stopping 450
Shrinkage Method 451
14.5.5 Variable Importance 452
14.6 Recent Topics in Boosting 453
14.6.1 Boosting in Nonlinear Time Series Models 453
14.6.2 Boosting in Volatility Models 455
14.6.3 Boosting with Momentum (BOOM) 457
14.6.4 Multi-Layered Gradient Boosting Decision Tree 460
14.7 Boosting in Macroeconomics and Finance 462
14.7.1 Boosting in Predicting Recessions 462
14.7.2 Boosting Diffusion Indices 462
14.7.3 Boosting with Markov-Switching 463
14.7.4 Boosting in Financial Modeling 463
14.8 Summary 463
References 464
15 Density Forecasting 466
15.1 Introduction 466
15.2 Computing Density Forecasts 468
15.2.1 Distribution Assumption 468
15.2.2 Bootstrapping 469
A Residual-Based Bootstrapping of Density Forecasts 470
Accounting for Autocorrelated or Heteroskedastic Errors 471
A Block Wild Bootstrapping of Density Forecasts 471
15.2.3 Bayesian Inference 472
15.3 Density Combinations 473
15.3.1 Bayesian Model Averaging 475
15.3.2 Linear Opinion Pool 475
15.3.3 Generalized Opinion Pool 476
15.4 Density Forecast Evaluation 478
15.4.1 Absolute Accuracy 478
15.4.2 Relative Accuracy 480
15.4.3 Forecast Calibration 481
15.5 Monte Carlo Methods for Predictive Approximation 484
15.5.1 Accept–Reject 484
15.5.2 Importance Sampling 485
15.5.3 Metropolis-Hastings 487
15.5.4 Constructing Density Forecasts Using GPUs 488
15.6 Conclusion 489
Appendix 489
References 491
16 Forecast Evaluation 496
16.1 Forecast Evaluation Using Point Predictive Accuracy Tests 496
16.1.1 Comparison of Two Non-nested Models 497
16.1.2 Comparison of Two Nested Models 500
Clark and McCracken Tests for Nested Models 500
Out-of-Sample Tests for Granger Causality 502
16.1.3 A Predictive Accuracy Test that is Consistent Against Generic Alternatives 505
16.1.4 Comparison of Multiple Models 509
A Reality Check for Data Snooping 510
A Test for Superior Predictive Ability 513
A Test Based on Sub-Sampling 514
16.2 Forecast Evaluation Using Density-Based Predictive Accuracy Tests 514
16.2.1 The Kullback–Leibler Information Criterion Approach 515
16.2.2 A Predictive Density Accuracy Test for Comparing Multiple Misspecified Models 515
16.3 Forecast Evaluation Using Density-Based Predictive Accuracy Tests that are not Loss Function Dependent: The Case of Stochastic Dominance 528
16.3.1 Robust Forecast Comparison 528
References 536
Part V Further Issues 539
17 Unit Roots and Cointegration 540
17.1 Introduction 540
17.2 General Setup 542
17.3 Transformations to Stationarity and Unit Root Pre-testing 545
17.3.1 Unit Root Test Characteristics 545
Size Distortions 545
Power and Specification Considerations 546
17.3.2 Multiple Unit Root Tests 547
Controlling Generalized Error Rates 548
Sequential Testing 549
Multivariate Bootstrap Methods 550
17.4 High-Dimensional Cointegration 552
17.4.1 Modelling Cointegration Through Factor Structures 552
Dynamic Factor Models 553
Factor-Augmented Error Correction Model 555
Estimating the Number of Factors 556
17.4.2 Sparse Models 557
Full-System Estimation 558
Single-Equation Estimation 560
17.5 Empirical Applications 562
17.5.1 Macroeconomic Forecasting Using the FRED-MD Dataset 562
Transformations to Stationarity 562
Forecast Comparison After Transformations 565
Forecast Comparisons for Cointegration Methods 567
17.5.2 Unemployment Nowcasting with Google Trends 575
Transformations to Stationarity 576
Forecast Comparison 576
17.6 Conclusion 579
References 580
18 Turning Points and Classification 584
18.1 Introduction 584
18.2 The Forecasting Problem 587
18.2.1 Real-Time Classification 587
18.2.2 Classification and Economic Data 588
18.2.3 Metrics for Evaluating Class Forecasts 589
18.3 Machine Learning Approaches to Supervised Classification 592
18.3.1 Cross-Validation 592
18.3.2 Naïve Bayes 593
18.3.3 k-Nearest Neighbors 595
18.3.4 Learning Vector Quantization 597
18.3.5 Classification Trees 599
18.3.6 Bagging, Random Forests, and Extremely Randomized Trees 601
18.3.7 Boosting 604
18.4 Markov-Switching Models 609
18.5 Application 612
18.6 Conclusion 620
References 620
19 Robust Methods for High-Dimensional Regression and Covariance Matrix Estimation 624
19.1 Introduction 624
19.2 Robust Statistics Tools 625
19.2.1 Huber Contamination Models 625
19.2.2 Influence Function and M-Estimators 627
19.3 Robust Regression in High Dimensions 629
19.3.1 A Class of Robust M-Estimators for Generalized Linear Models 629
19.3.2 Oracle Estimators and Robustness 630
19.3.3 Penalized M-Estimator 630
19.3.4 Computational Aspects 633
Fisher Scoring Coordinate Descent 633
Tuning Parameter Selection 634
19.3.5 Robustness Properties 635
Finite Sample Bias 635
Influence Function 637
19.4 Robust Estimation of High-Dimensional Covariance Matrices 639
19.4.1 Sparse Covariance Matrix Estimation 639
19.4.2 The Challenge of Heavy Tails 641
19.4.3 Revisiting Tools from Robust Statistics 642
19.4.4 On the Robustness Properties of the Pilot Estimators 644
19.5 Further Extensions 645
19.5.1 Generalized Additive Models 645
19.5.2 Sure Independence Screening 646
19.5.3 Precision Matrix Estimation 647
19.5.4 Factor Models and High-Frequency Data 648
19.6 Conclusion 649
References 649
20 Frequency Domain 653
20.1 Introduction 653
20.2 Background 654
20.3 Granger Causality 658
20.4 Wavelet 665
20.4.1 Wavelet Forecasting 670
20.5 ZVAR and Generalised Shift Operator 676
20.5.1 Generalised Shift Operator 677
20.5.2 ZVAR Model 679
20.5.3 Monte Carlo Evidence 681
20.6 Conclusion 683
References 684
21 Hierarchical Forecasting 686
21.1 Introduction 686
21.2 Hierarchical Time Series 687
21.3 Point Forecasting 690
21.3.1 Single-Level Approaches 691
Bottom-Up 691
Top-Down 691
Middle-Out 692
21.3.2 Point Forecast Reconciliation 692
Optimal MinT Reconciliation 693
21.4 Hierarchical Probabilistic Forecasting 695
21.4.1 Probabilistic Forecast Reconciliation in the Gaussian Framework 696
21.4.2 Probabilistic Forecast Reconciliation in the Non-parametric Framework 696
21.5 Australian GDP 697
21.5.1 Income Approach 698
21.5.2 Expenditure Approach 698
21.6 Empirical Application Methodology 700
21.6.1 Models 700
21.6.2 Evaluation 703
21.7 Results 704
21.7.1 Base Forecasts 704
21.7.2 Point Forecast Reconciliation 706
21.7.3 Probabilistic Forecast Reconciliation 706
21.8 Conclusions 710
Appendix 711
References 714

Publication date (per publisher): 28.11.2019
Series: Advanced Studies in Theoretical and Applied Econometrics
Additional information: XIII, 719 p., 80 illus., 62 illus. in color
Language: English
Subject areas: Mathematics / Computer Science > Computer Science > Databases; Business & Economics > General / Reference; Business & Economics > Economics
Keywords: Aggregation • Averaging • Big Data • Cointegration • dimension reduction • dynamic factor models • Estimation of common factors • Feature screening • Forecasts • Macroeconomic Forecasting • Mixed frequency data sampling regressions • Model forecast combination • Penalized regression • Shrinkage • Subspace Methods • Time varying parameters • Unit Roots • Variable selection • Vector autoregressions
ISBN-10: 3-030-31150-3 / 3030311503
ISBN-13: 978-3-030-31150-6 / 9783030311506
PDF (digital watermark)
Size: 11.8 MB

DRM: digital watermark
This eBook contains a digital watermark and is thus personalized for you. If the eBook is passed on to third parties without authorization, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to reference works with columns, tables, and figures. A PDF can be displayed on almost any device, but it is only of limited use on small screens (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. It is, however, not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in your web browser.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
