show episodes
 
Coding Blocks

Allen Underwood, Michael Outlaw, Joseph Zack

Monthly+
 
The world of computer programming is vast in scope. There are literally thousands of topics to cover, and no one person could ever reach them all. One of the goals of the Coding Blocks podcast is to introduce a number of these topics to the audience so they can learn during their commute or while cutting the grass. We will cover topics such as best programming practices, design patterns, coding for performance, object-oriented coding, database design and implementation, tips, tricks and a who ...
 
Smalltalk Reflections

David Buck and Craig Latta

Monthly
 
The Smalltalk programming language is not only the first pure object-oriented language but also the birthplace of many of today's best practices in software development, including agile methods, design patterns, unit testing, refactoring, and incremental development. In the Smalltalk Reflections podcast, David Buck and Craig Latta guide you through the world of Smalltalk, covering topics from basic object-oriented principles to software testing, version control, and deployment techniques. ...
 
The Programmer Toolbox

Kathryn Hodge, Robyn Silber

Monthly
 
A podcast hosted by Kathryn Hodge and Robyn Silber that aims to introduce others to coding, demystify core programming concepts, provide tips for success, and start more conversations around important topics in technology.
 
Welcome to "The AI Chronicles", the podcast that takes you on a journey into the fascinating world of Artificial Intelligence (AI), AGI, GPT-5, GPT-4, Deep Learning, and Machine Learning. In this era of rapid technological advancement, AI has emerged as a transformative force, revolutionizing industries and shaping the way we interact with technology. I'm your host, GPT-5, and I invite you to join me as we delve into the cutting-edge developments, breakthroughs, and ethical implications of A ...
 
As it concerns the racial history of our country, are the objects in the mirror closer than they appear or not? Objects In The Mirror podcast asks this question as listeners hear firsthand accounts of those who lived during the segregation and early desegregation eras.
 
24 Minutes of UX is a podcast of the people, by the people, for the people. The format is a combination of an after-work discussion, a 1-on-1 chat, a global community platform, and a coaching session. Episodes typically feature a Seeker and a Giver. The Seeker is a person seeking advice on a specific topic within the domain of UX. The Giver is there to share experiences about “what I wish I had known when I started with this topic”. Both the Seeker and the Giver can be senior or junior - w ...
 
show series
 
Scala, short for "scalable language," is a powerful programming language that merges the best features of both object-oriented and functional programming paradigms. Designed to be concise, elegant, and expressive, Scala offers a robust framework for developers to build scalable and maintainable software solutions. Combining Paradigms: One of Scala's…
 
RL4J is a powerful open-source library designed for reinforcement learning (RL) applications within the Java ecosystem. Developed as part of the Deeplearning4j project, RL4J aims to provide developers and researchers with robust tools to implement and experiment with various reinforcement learning algorithms. As machine learning continues to expand…
 
Arbiter is an advanced tool designed to enhance the process of optimization and hyperparameter tuning in machine learning models. As machine learning continues to evolve, the importance of fine-tuning model parameters to achieve optimal performance has become increasingly critical. Key features of Arbiter include automated hyperparameter tuning: Arbiter au…
 
Hypothesis testing is a fundamental method in statistics used to make inferences about a population based on sample data. It provides a structured approach to evaluate whether observed data deviates significantly from what is expected under a specific hypothesis. Three commonly used hypothesis tests are the Z-test, T-test, and ANOVA, each serving d…
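For readers who want to try the tests this blurb names, here is a minimal sketch (assuming SciPy is available) that runs a two-sample t-test and a one-way ANOVA; the simulated group means, sizes, and seed are made-up values for illustration only.

```python
# Hedged sketch: two-sample t-test and one-way ANOVA with SciPy.
# The simulated group means and sizes are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.5, scale=1.0, size=30)
group_c = rng.normal(loc=6.0, scale=1.0, size=30)

# Two-sample t-test: do group_a and group_b share the same mean?
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")

# One-way ANOVA: do all three groups share a common mean?
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={f_p:.3f}")
```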
 
Non-parametric tests are a class of statistical methods that do not rely on assumptions about the underlying distribution of data. Unlike parametric tests, which assume a specific distribution for the data, non-parametric tests are more flexible and can be applied to a wider range of data types. This makes them particularly useful in situations whe…
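As a rough companion to this blurb, the sketch below applies the Mann-Whitney U test, one common non-parametric alternative to the two-sample t-test, to simulated skewed data; the exponential samples and the seed are assumptions made for the example.

```python
# Hedged sketch: Mann-Whitney U test, a non-parametric alternative to the
# two-sample t-test. The exponential samples are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_a = rng.exponential(scale=1.0, size=40)   # skewed, non-normal data
sample_b = rng.exponential(scale=1.5, size=40)

u_stat, p_value = stats.mannwhitneyu(sample_a, sample_b)
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_value:.3f}")
```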
 
The General Linear Model (GLM) is a foundational framework in statistical analysis, widely used for modeling and understanding relationships between variables. It offers a flexible and comprehensive approach for analyzing data by encompassing various types of linear relationships and can be applied across numerous fields including economics, social…
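A minimal sketch of the simplest member of the GLM family, an ordinary least squares regression fitted with statsmodels; the synthetic slope, intercept, and noise level are assumptions for demonstration.

```python
# Hedged sketch: an ordinary least squares fit, the simplest member of the
# General Linear Model family. The synthetic data is an assumption for demo.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=100)   # y = 2 + 3x + noise

X = sm.add_constant(x)           # add intercept column
model = sm.OLS(y, X).fit()
print(model.params)              # estimated intercept and slope
```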
 
Statistical models are powerful tools that allow us to understand, describe, and predict patterns in data. These models provide a structured way to capture the underlying relationships between variables, enabling us to make informed decisions, test hypotheses, and generate predictions about future outcomes. Whether in science, economics, medicine, …
 
Point and interval estimation are key concepts in statistics that provide methods for estimating population parameters based on sample data. These techniques are fundamental to making informed decisions and predictions in various fields, from science and engineering to economics and public policy. By offering both specific values and ranges of plau…
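To make the two ideas concrete, the hedged sketch below computes a point estimate (the sample mean) and a 95% t-based confidence interval for a simulated sample; the data and the confidence level are illustrative assumptions.

```python
# Hedged sketch: point estimate (sample mean) and a 95% confidence interval
# for the population mean, using the t distribution. Data is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, scale=2.0, size=50)

mean = sample.mean()                         # point estimate
sem = stats.sem(sample)                      # standard error of the mean
low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
print(f"mean={mean:.2f}, 95% CI=({low:.2f}, {high:.2f})")
```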
 
P-values and confidence intervals are fundamental concepts in statistical analysis, providing critical insights into the reliability and significance of data findings. These tools help researchers, scientists, and analysts make informed decisions based on sample data, enabling them to draw conclusions about broader populations with a known level of…
 
ImageNet is a large-scale visual database designed for use in visual object recognition research, and it has played a pivotal role in advancing the field of computer vision and deep learning. Launched in 2009 by researchers at Princeton and Stanford, ImageNet consists of millions of labeled images categorized into thousands of object classes, makin…
 
Bayesian inference is a powerful statistical method that provides a framework for updating our beliefs in light of new evidence. Rooted in Bayes' theorem, this approach allows us to combine prior knowledge with new data to form updated, or posterior, distributions, which offer a more nuanced and flexible understanding of the parameters we are study…
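A small sketch of the textbook Beta-Binomial case of Bayesian updating: a uniform prior on a coin's bias is combined with assumed observed counts to give a posterior; the prior and the counts are made-up values, not anything from the episode.

```python
# Hedged sketch: a Beta-Binomial update, the textbook example of Bayesian
# inference. The prior and the observed counts are illustrative assumptions.
from scipy import stats

prior_a, prior_b = 1, 1          # uniform Beta(1, 1) prior on the coin bias
heads, tails = 7, 3              # observed data (assumed for the example)

post_a, post_b = prior_a + heads, prior_b + tails   # conjugate update
posterior = stats.beta(post_a, post_b)
print(f"posterior mean = {posterior.mean():.3f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```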
 
Statistical inference is a critical branch of statistics that involves making predictions, estimates, or decisions about a population based on a sample of data. It serves as the bridge between raw data and meaningful insights, allowing researchers, analysts, and decision-makers to draw conclusions that extend beyond the immediate data at hand. Core…
 
Sampling techniques are crucial methods used in statistics to select a subset of individuals or observations from a larger population. These techniques allow researchers to gather data efficiently while ensuring that the sample accurately reflects the characteristics of the entire population. Among the most widely used sampling methods are random s…
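The sketch below contrasts a simple random sample with a stratified sample of an imbalanced label array, using scikit-learn's train_test_split; the 90/10 class split and the sample size are assumptions chosen only to show the difference.

```python
# Hedged sketch: simple random vs. stratified sampling of an imbalanced
# label array. The 90/10 class split is an assumption for illustration.
import numpy as np
from sklearn.model_selection import train_test_split

labels = np.array([0] * 90 + [1] * 10)       # imbalanced population

# Simple random sample: class proportions can drift by chance.
_, random_sample = train_test_split(labels, test_size=20, random_state=0)

# Stratified sample: class proportions are preserved in the sample.
_, stratified_sample = train_test_split(
    labels, test_size=20, random_state=0, stratify=labels
)
print("random:    ", np.bincount(random_sample))
print("stratified:", np.bincount(stratified_sample))
```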
 
Sampling distributions are a fundamental concept in statistics that play a crucial role in understanding how sample data relates to the broader population. When we collect data from a sample, we often want to make inferences about the entire population from which the sample was drawn. However, individual samples can vary, leading to differences bet…
 
The Central Limit Theorem (CLT) is one of the most important and foundational concepts in statistics. It provides a crucial link between probability theory and statistical inference, enabling statisticians and researchers to draw reliable conclusions about a population based on sample data. The CLT states that, under certain conditions, the distrib…
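A quick simulation of the theorem's claim: means of samples drawn from a skewed exponential distribution end up approximately normal, with spread close to the theoretical sigma/sqrt(n); the sample size and the number of replications are arbitrary choices for the demo.

```python
# Hedged sketch: a small simulation of the Central Limit Theorem. Means of
# samples drawn from a skewed exponential distribution cluster into an
# approximately normal shape. Sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
sample_means = rng.exponential(scale=1.0, size=(10_000, 30)).mean(axis=1)

print(f"mean of sample means    = {sample_means.mean():.3f}")   # close to 1.0
print(f"std dev of sample means = {sample_means.std():.3f}")
print(f"theoretical std (CLT)   = {1 / np.sqrt(30):.3f}")
```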
 
Sampling and distributions are fundamental concepts in statistics that play a crucial role in analyzing and understanding data. They form the backbone of statistical inference, enabling researchers to draw conclusions about a population based on a smaller, manageable subset of data. By understanding how samples relate to distributions, statistician…
 
Kernel Density Estimation (KDE) is a non-parametric method used in statistics to estimate the probability density function of a random variable. Unlike traditional methods that rely on predefined distributions, KDE provides a flexible way to model the underlying distribution of data without making strong assumptions. This makes KDE a versatile and …
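A minimal sketch of KDE with SciPy's gaussian_kde on simulated bimodal data; the two-component mixture and the evaluation grid are assumptions for illustration.

```python
# Hedged sketch: Gaussian kernel density estimation on simulated bimodal
# data. The mixture of two normals is an assumption for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])

kde = stats.gaussian_kde(data)           # bandwidth chosen automatically
grid = np.linspace(-4, 4, 9)
for x, density in zip(grid, kde(grid)):
    print(f"x={x:+.1f}  density={density:.3f}")
```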
 
Distribution-free tests, also known as non-parametric tests, are statistical methods used for hypothesis testing that do not rely on any assumptions about the underlying distribution of the data. Unlike parametric tests, which assume that data follows a specific distribution (such as the normal distribution), distribution-free tests offer a more fl…
 
Non-parametric statistics is a branch of statistics that offers powerful tools for analyzing data without the need for making assumptions about the underlying distribution of the data. Unlike parametric methods, which require the data to follow a specific distribution (such as the normal distribution), non-parametric methods are more flexible and c…
 
Factor Analysis (FA) is a statistical method used to identify underlying relationships between observed variables. By reducing a large set of variables into a smaller number of factors, FA helps to simplify data, uncover hidden patterns, and reveal the underlying structure of complex datasets. This technique is widely employed in fields such as psy…
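A short sketch using scikit-learn's FactorAnalysis to reduce the four iris measurements to two latent factors; the choice of two factors is an assumption made purely for illustration.

```python
# Hedged sketch: factor analysis with scikit-learn, reducing the four iris
# measurements to two latent factors. Two factors is an assumed choice.
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X = load_iris().data                       # 150 samples x 4 observed variables
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)               # per-sample factor scores

print("loadings (factors x variables):")
print(fa.components_.round(2))
print("transformed shape:", scores.shape)  # (150, 2)
```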
 
Probability distributions are essential concepts in statistics and probability theory, providing a way to describe how probabilities are spread across different outcomes of a random event. They are the foundation for analyzing and interpreting data in various fields, enabling us to understand the likelihood of different outcomes, assess risks, and …
 
Probability distributions are fundamental concepts in statistics and probability theory that describe how the probabilities of different possible outcomes are distributed across a range of values. By providing a mathematical description of the likelihood of various outcomes, probability distributions serve as the backbone for understanding and anal…
 
Probability spaces form the fundamental framework within which probability theory operates. They provide a structured way to describe and analyze random events, offering a mathematical foundation for understanding uncertainty, risk, and randomness. By defining a space where all possible outcomes of an experiment or random process are considered, pr…
 
Multivariate statistics is a branch of statistics that deals with the simultaneous observation and analysis of more than one statistical outcome variable. Unlike univariate or bivariate analysis, which focus on one or two variables at a time, multivariate statistics considers the interrelationships between multiple variables, providing a more compr…
 
Graph Recurrent Networks (GRNs) are an advanced type of neural network that combines the capabilities of recurrent neural networks (RNNs) with graph neural networks (GNNs) to model data that is both sequential and structured as graphs. GRNs are particularly powerful in scenarios where the data not only changes over time but is also interrelated in …
 
Ruby is a dynamic, open-source programming language known for its simplicity, elegance, and productivity. Created by Yukihiro "Matz" Matsumoto in the mid-1990s, Ruby was designed with the principle of making programming both enjoyable and efficient. The language’s intuitive syntax and flexibility make it a favorite among developers, especially for …
 
Vue.js is an open-source JavaScript framework used for building user interfaces and single-page applications. Created by Evan You in 2014, Vue.js has quickly gained popularity among developers for its simplicity, flexibility, and powerful features. It is designed to be incrementally adoptable, meaning that it can be used for everything from enhanci…
 
ReactJS is a popular open-source JavaScript library used for building user interfaces, particularly single-page applications where a seamless user experience is key. Developed and maintained by Facebook, ReactJS has become a cornerstone of modern web development, enabling developers to create complex, interactive, and high-performance user interfac…
 
Apache Spark is an open-source, distributed computing system designed for fast and flexible large-scale data processing. Originally developed at UC Berkeley’s AMPLab, Spark has become one of the most popular big data frameworks, known for its ability to process vast amounts of data quickly and efficiently. Spark provides a unified analytics engine …
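For a feel of the API, here is a minimal PySpark sketch that builds a tiny DataFrame and aggregates it on a local session; it assumes pyspark is installed and can start locally, and the rows and column names are invented for the example.

```python
# Hedged sketch: a minimal PySpark job on a local session. The toy rows and
# column names are assumptions made purely for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").master("local[*]").getOrCreate()

rows = [("alice", 3), ("bob", 5), ("alice", 7)]
df = spark.createDataFrame(rows, ["user", "clicks"])

# Group and aggregate, then print the (small) result on the driver.
df.groupBy("user").sum("clicks").show()
spark.stop()
```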
 
Clojure is a modern, dynamic, and functional programming language that runs on the Java Virtual Machine (JVM). Created by Rich Hickey in 2007, Clojure is designed to be simple, expressive, and highly efficient for concurrent programming. It combines the powerful features of Lisp, a long-standing family of programming languages known for its flexibi…
 
Caffe is an open-source deep learning framework developed by the Berkeley Vision and Learning Center (BVLC) and contributed to by a global community of researchers and engineers. Designed with an emphasis on speed, modularity, and ease of use, Caffe is particularly well-suited for developing and deploying deep learning models, especially in the fie…
 
Nimfa is a Python library specifically designed for performing Non-negative Matrix Factorization (NMF), a powerful technique used in data analysis to uncover hidden structures and patterns in non-negative data. Developed to be both flexible and easy to use, Nimfa provides a comprehensive set of tools for implementing various NMF algorithms, making …
 
FastAPI is a modern, open-source web framework for building APIs with Python. Created by Sebastián Ramírez, FastAPI is designed to provide high performance, easy-to-use features, and robust documentation. It leverages Python's type hints to offer automatic data validation and serialization, making it an excellent choice for developing RESTful APIs …
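A minimal FastAPI sketch showing how type hints drive validation; the route paths, the Item model, and the main.py filename in the run comment are assumptions for illustration, not an API from the episode.

```python
# Hedged sketch: a minimal FastAPI application. The /items/ route and the
# Item fields are assumptions for illustration, not a real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/")
def read_root():
    return {"message": "hello"}

@app.post("/items/")
def create_item(item: Item):
    # Request bodies are validated against the Item type hints automatically.
    return {"name": item.name, "price": item.price}

# Run with: uvicorn main:app --reload   (assuming this file is main.py)
```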
 
NetBeans is a powerful, open-source integrated development environment (IDE) used by developers to create applications in various programming languages. Initially developed by Sun Microsystems and now maintained by the Apache Software Foundation, NetBeans provides a robust platform for building desktop, web, and mobile applications. It supports a w…
 
The Area Under the Curve (AUC) is a widely used metric in the evaluation of binary classification models. It provides a single scalar value that summarizes the performance of a classifier across all possible threshold values, offering a clear and intuitive measure of how well the model distinguishes between positive and negative classes. The AUC is…
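A tiny sketch of the metric in code: scikit-learn's roc_auc_score applied to made-up labels and scores; the values are chosen only to show the call.

```python
# Hedged sketch: ROC AUC for a toy binary classifier. The labels and the
# predicted scores are made-up values used only to show the computation.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")   # 1.0 = perfect ranking, 0.5 = random guessing
```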
 
Non-Negative Matrix Factorization (NMF) is a powerful technique in the field of data analysis and machine learning used to reduce the dimensionality of data and uncover hidden patterns. Unlike other matrix factorization methods, NMF imposes the constraint that the matrix elements must be non-negative. This constraint makes NMF particularly useful f…
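A brief sketch of NMF with scikit-learn on a small random non-negative matrix; the matrix shape and the rank of two are assumptions for the demo.

```python
# Hedged sketch: non-negative matrix factorization with scikit-learn on a
# tiny random non-negative matrix. The shape and rank are assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
V = rng.random((6, 4))                    # non-negative 6x4 data matrix

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)                # 6x2 basis weights
H = model.components_                     # 2x4 components

print("reconstruction:\n", (W @ H).round(2))
```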
 
Grab your headphones because it's water cooler time! In this episode we're catching up on feedback, putting our skills to the test, and wondering what we're missing. Plus, Allen's telling it how it is, Outlaw is putting it all together and Joe is minding the gaps! View the full show notes here: https://www.codingblocks.net/episode240 Reviews Thank …
 
Signal Detection Theory (SDT) is a framework used to analyze and understand decision-making processes in situations where there is uncertainty. Originating in the fields of radar and telecommunications during World War II, SDT has since been applied across various domains, including psychology, neuroscience, medical diagnostics, and market research…
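To ground the idea, a small sketch computes the classic sensitivity index d' and the criterion c from a hit rate and a false-alarm rate; both rates are assumed numbers used only to show the arithmetic.

```python
# Hedged sketch: the classic signal-detection sensitivity index d', computed
# from a hit rate and a false-alarm rate. The two rates are assumed values.
from scipy.stats import norm

hit_rate = 0.82          # P(respond "signal" | signal present)  -- assumed
false_alarm_rate = 0.25  # P(respond "signal" | signal absent)   -- assumed

d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```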
 
AngularJS is a powerful JavaScript-based open-source front-end web framework developed by Google. Introduced in 2010, AngularJS was designed to simplify the development and testing of single-page applications (SPAs) by providing a robust framework for client-side model–view–controller (MVC) architecture. It has significantly transformed the way dev…
 
The Expectation-Maximization (EM) algorithm is a widely-used statistical technique for finding maximum likelihood estimates in the presence of latent variables. Developed by Arthur Dempster, Nan Laird, and Donald Rubin in 1977, the EM algorithm provides an iterative method to handle incomplete data or missing values, making it a cornerstone in fiel…
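One convenient way to see EM in action is scikit-learn's GaussianMixture, which is fitted with EM under the hood; the sketch below recovers two simulated 1-D clusters whose means and sizes are assumptions for the example.

```python
# Hedged sketch: scikit-learn's GaussianMixture, which is fit with the EM
# algorithm, recovering two simulated 1-D clusters. Cluster means are assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
X = data.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("estimated means:  ", gmm.means_.ravel().round(2))
print("estimated weights:", gmm.weights_.round(2))
```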
 
Kotlin is a contemporary programming language developed by JetBrains, designed to be fully interoperable with Java while offering a more concise and expressive syntax. Introduced in 2011 and officially released in 2016, Kotlin has rapidly gained popularity among developers for its modern features, safety enhancements, and seamless integration with …
 
Dynamic Topic Models (DTM) are an advanced extension of topic modeling techniques designed to analyze and understand how topics in a collection of documents evolve over time. Developed to address the limitations of static topic models like Latent Dirichlet Allocation (LDA), DTMs allow researchers and analysts to track the progression and transforma…
 
The False Positive Rate (FPR) is a crucial metric used to evaluate the performance of binary classification models. It measures the proportion of negative instances that are incorrectly classified as positive by the model. Understanding FPR is essential for assessing how well a model distinguishes between classes, particularly in applications where…
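A short sketch of the definition in code: FPR = FP / (FP + TN), computed from a confusion matrix over made-up labels and predictions.

```python
# Hedged sketch: false positive rate computed from a confusion matrix. The
# true labels and predictions are made-up values for illustration.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)                      # FPR = FP / (FP + TN)
print(f"FP={fp}, TN={tn}, FPR={fpr:.2f}")
```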
 