Good learning resources (links)

A Beginner’s Guide to Variational Methods: Mean-Field Approximation

http://www.math.uah.edu/stat/prob/index.html — graduate-level notes on probability theory and stochastic processes; combining them with this relatively simple course works well: https://www.statlect.com/fundamentals-of-probability/conditional-expectation

https://www.codecademy.com/learn — courses on various programming languages

RNN and LSTM resource list: http://handong1587.github.io/deep_learning/2015/10/09/rnn-and-lstm.html; https://medium.com/@aidangomez/let-s-do-this-f9b699de31d9#.j9mpdvhhh — "Backpropogating an LSTM: A Numerical Example", which works directly through an example you can compute by hand; quite good

Deep learning material: http://cs231n.github.io/assignments2016/assignment3/

On non-convex optimization and saddle points: http://www.offconvex.org/2016/03/24/saddles-again/ and http://www.argmin.net/2016/04/11/flatness/

https://pymanopt.github.io/ — manifold optimization

http://videolectures.net/course_information_theory_pattern_recognition/ — Course on Information Theory, Pattern Recognition, and Neural Networks, a famous machine learning course

https://github.com/yasoob/intermediatePython/blob/master/python_c_extension.rst — Python material at the right level for me now

The clearest explanation of lim sup and lim inf for sets and for sequences {a_n}: http://math.stackexchange.com/questions/4705/limit-inferior-and-superior-for-sets-vs-real-numbers
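
As a quick reference alongside that thread, the standard definitions (my own summary, not quoted from the page):

```latex
\limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty} A_k
  = \{\omega : \omega \in A_n \text{ for infinitely many } n\},
\qquad
\liminf_{n\to\infty} A_n = \bigcup_{n=1}^{\infty}\bigcap_{k=n}^{\infty} A_k
  = \{\omega : \omega \in A_n \text{ for all but finitely many } n\},
```

and for real sequences, analogously, \(\limsup_n a_n = \inf_n \sup_{k \ge n} a_k\) and \(\liminf_n a_n = \sup_n \inf_{k \ge n} a_k\).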

The clearest explanations of Example 1 in "Examples and Applications" of 6. Convergence: http://math.stackexchange.com/questions/1386695/infinite-heads-from-infinite-coin-tosses and http://stats.stackexchange.com/questions/164960/how-to-prove-that-an-event-occurs-infinitely-often-almost-surely
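
The core argument in those threads is the second Borel–Cantelli lemma; a sketch, with \(H_n\) denoting the event that toss \(n\) is heads:

```latex
H_1, H_2, \ldots \text{ independent},\quad \mathbb{P}(H_n) = p > 0
\;\Longrightarrow\; \sum_{n=1}^{\infty} \mathbb{P}(H_n) = \infty
\;\Longrightarrow\; \mathbb{P}\Big(\limsup_{n\to\infty} H_n\Big) = \mathbb{P}(H_n \text{ i.o.}) = 1,
```

i.e. heads occur infinitely often almost surely.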

Why Deep Learning Works II: the Renormalization Group:https://charlesmartin14.wordpress.com/2015/04/01/why-deep-learning-works-ii-the-renormalization-group/

A good deep learning visualization, useful for slides: http://shixialiu.com/publications/cnnvis/demo/

Hinton's DL course and PRML-related lecture notes: https://class.coursera.org/neuralnets-2012-001 — Hinton's classic course. https://github.com/thejakeyboy/umich-eecs545-lectures, https://www.cs.colorado.edu/~mozer/Teaching/syllabi/DeepLearning2015/. http://joanbruna.github.io/stat212b/ and https://github.com/joanbruna/stat212b — two fairly new 2016 courses covering the latest deep learning research. http://machinelearningmastery.com/deep-learning-courses/ — links to most deep learning courses. Variational neural networks: "Neural Variational Inference: Variational Autoencoders and Helmholtz Machines" — http://barmaley-exe.github.io/posts/2016-07-11-neural-variational-inference-variational-autoencoders-and-Helmholtz-machines.html. A very good article explaining VAEs — among generative models, VAEs are a must-know: https://jaan.io/unreasonable-confusion/

metric learning:https://github.com/all-umass/metric-learn

Reinforcement learning: https://github.com/Mononofu/reinforcement-learning; "Lenny #1: Robots + Reinforcement Learning"; CS 294: Deep Reinforcement Learning, Fall 2015 — Berkeley's RL course; the blog post "Deep Reinforcement Learning: A Tutorial" and https://gym.openai.com/docs/rl are also good; "Deep Reinforcement Learning: Pong from Pixels" — http://karpathy.github.io/2016/05/31/rl/; Deep Reinforcement Learning in TensorFlow — https://github.com/carpedm20/deep-rl-tensorflow; https://www.quora.com/What-is-a-good-MOOC-on-reinforcement-learning; basic_reinforcement_learning: https://github.com/vmayoral/basic_reinforcement_learning; "Extending the OpenAI Gym for robotics" — http://blog.deeprobotics.es/robots,/simulation,/ai,/rl,/reinforcement/learning/2016/08/19/openai-gym-for-robotics/; https://github.com/ShangtongZhang/reinforcement-learning-an-introduction — a good textbook, with code

Reinforcement learning simulation and algorithm evaluation: https://github.com/rllab/rllab — rllab is a framework for developing and evaluating reinforcement learning algorithms; it includes a wide range of continuous control tasks plus implementations of several algorithms. https://github.com/openai/gym, https://gym.openai.com/, https://openai.com/blog/openai-gym-beta/. A reinforcement learning survey with lots of material: https://github.com/andrewliao11/Deep-Reinforcement-Learning-Survey

Latest arXiv papers: http://www.arxiv-sanity.com/top

Modern Pandas (Part 1):http://tomaugspurger.github.io/modern-1.html

Explanation of open sets: http://mathoverflow.net/questions/19152/why-is-a-topology-made-up-of-open-sets/30231#30231

Understanding tensors: https://www.quora.com/What-is-a-good-way-to-understand-tensors

Kaggle competition tutorial: Kaggle Python Tutorial on Machine Learning, http://blog.kaggle.com/2016/04/25/free-kaggle-machine-learning-tutorial-for-python/

TensorFlow Data Inputs (Part 1): Placeholders, Protobufs & Queues:https://indico.io/blog/tensorflow-data-inputs-part1-placeholders-protobufs-queues/

Data Visualization — the University of Washington course: http://courses.cs.washington.edu/courses/cse512/16sp/

Bayesian statistics, A Guide to Bayesian Statistics: https://www.countbayesie.com/blog/2016/5/1/a-guide-to-bayesian-statistics. https://github.com/bikestra/bdapy — code for Bayesian Data Analysis (Gelman et al., 3rd edition); the book is already downloaded

matplotlib: "Matplotlib tutorial: Plotting tweets mentioning Trump, Clinton & Sanders", https://www.dataquest.io/blog/matplotlib-tutorial/

Introduction to Pandas:http://nbviewer.jupyter.org/github/fonnesbeck/Bios8366/blob/master/notebooks/Section2_5-Introduction-to-Pandas.ipynb

Reading and writing Excel files in Python: https://segmentfault.com/a/1190000005144821

Python programming tutorials: http://bafflednerd.com/learn-python-online/; http://www.liaoxuefeng.com/wiki/001374738125095c955c1e6d8bb493182103fac9270762a000/001386820062641f3bcc60a4b164f8d91df476445697b9e000 — Liao Xuefeng's Python tutorial; read part of it, quite good

Python interview questions: https://github.com/taizilongxu/interview_python#3-%E6%AD%BB%E9%94%81

Topological Data Analysis:https://gist.github.com/calstad/01e174faff2cdca7faf9

Copy model from theano to tensorflow:https://medium.com/@sentimentron/faceoff-theano-vs-tensorflow-e25648c31800#.nnt4z985z

Quantitative trading: A Survey of Deep Learning Techniques Applied to Trading, http://gregharris.info/a-survey-of-deep-learning-techniques-applied-to-trading/

Deep learning trends: http://www.computervisionblog.com/2016/06/deep-learning-trends-iclr-2016.html

Bayesian learning (Materials for CSE 515T: Bayesian Methods in Machine Learning — https://github.com/rmgarnett/cse515t) and courses on variational inference, plus advanced machine learning courses that pair well with Bishop's book: http://www.cs.toronto.edu/~hinton/csc2535/, http://www.cs.toronto.edu/~rsalakhu/STA4273_2015/assignments.html

Design patterns: A collection of design patterns and idioms in Python — https://github.com/faif/python-patterns, http://www.pysnap.com/design-patterns-explained/

Quantitative trading: https://xueqiu.com/4105947155/65184373

A Beginner’s Guide to Variational Methods: Mean-Field Approximation:http://blog.evjang.com/2016/08/variational-bayes.html

The Ultimate List of 300+ Computer Vision Resources:https://hackerlists.com/computer-vision-resources/

The Ultimate List of TensorFlow Resources: Books, Tutorials, Libraries and More:https://hackerlists.com/tensorflow-resources/

Differentiable programming vs probabilistic programming: https://pseudoprofound.wordpress.com/2016/08/03/differentiable-programming/

Online learning course: http://courses.cs.washington.edu/courses/cse599s/14sp/index.html

Python + CUDA programming: https://github.com/src-d/kmcuda, https://github.com/deeplearningais/ndarray/tree/856812e63ea88532ccb04acce6e93024cdca1ed7

The best introductory blog series on compilers: https://ruslanspivak.com/lsbasi-part4/

Things I don't yet understand:

1. Probability Spaces: Events and Random Variables

Suppose that the sampling is without replacement (the most common case). If we record the ordered sample X = (X_1, X_2, …, X_n), then the unordered sample W = {X_1, X_2, …, X_n} is a random variable (that is, a function of X). On the other hand, if we just record the unordered sample W in the first place, then we cannot recover the ordered sample. Note also that the number of ordered samples of size n is simply n! times the number of unordered samples of size n. No such simple relationship exists when the sampling is with replacement. This will turn out to be an important point when we study probability models based on random samples, in the next section.
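
The n! relationship is easy to check numerically; a small sketch (the population and sample size here are arbitrary choices for illustration):

```python
from itertools import combinations, combinations_with_replacement, permutations, product
from math import factorial

population = ["a", "b", "c", "d", "e"]  # population of size 5
n = 3                                   # sample size

# Without replacement: each unordered sample of size n corresponds to
# exactly n! ordered samples.
ordered = len(list(permutations(population, n)))     # 5 * 4 * 3 = 60
unordered = len(list(combinations(population, n)))   # C(5, 3) = 10
assert ordered == factorial(n) * unordered

# With replacement: no such simple relationship, because ordered samples
# containing repeated elements have fewer than n! distinct orderings.
ordered_wr = len(list(product(population, repeat=n)))                   # 5**3 = 125
unordered_wr = len(list(combinations_with_replacement(population, n)))  # C(7, 3) = 35
assert ordered_wr != factorial(n) * unordered_wr

print(ordered, unordered, ordered_wr, unordered_wr)  # 60 10 125 35
```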

1. Probability Spaces: Convergence

The following events are the same:

  1. X_n does not converge to X as n → ∞.
  2. For some ε > 0, |X_n − X| > ε for infinitely many n ∈ N_+. Why are these two equivalent? Because "for infinitely many n" can be written as a lim sup of events.
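
The equivalence noted above can be written out, taking ε = 1/m (standard identity, my own summary):

```latex
\{X_n \not\to X\}
= \bigcup_{m=1}^{\infty} \Big\{ |X_n - X| > \tfrac{1}{m} \text{ for infinitely many } n \Big\}
= \bigcup_{m=1}^{\infty} \limsup_{n\to\infty} \Big\{ |X_n - X| > \tfrac{1}{m} \Big\},
\qquad
\limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k .
```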

Measure Spaces

If P(X = Y) = 1 and P(Y = Z) = 1 then P(X = Z) = 1. Example 23: this follows from {X ≠ Z} ⊆ {X ≠ Y} ∪ {Y ≠ Z}, so P(X ≠ Z) ≤ P(X ≠ Y) + P(Y ≠ Z) = 0.

Result 24 still needs a detailed derivation!!! In the proof of Result 30, I did not follow the step "But B ∈ T and A∖B ∈ T"; this is because A∖B is equivalent to the empty set, and similarly for B∖A, since P(A∖B) + P(B∖A) = P(A △ B) = 0 implies A∖B is equivalent to the empty set.

Result 31: the step "But also, B_i ∩ B_j ⊆ N_i ∪ N_j" holds because the {A_i} are pairwise disjoint sets, which makes it easy to prove. Also, "Bonferroni's inequality" here should be the inclusion–exclusion formula.
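
For reference, the inclusion–exclusion formula and the Bonferroni bounds it generates (standard statements, my own summary):

```latex
\mathbb{P}\Big(\bigcup_{i=1}^{n} A_i\Big)
= \sum_{i} \mathbb{P}(A_i)
- \sum_{i<j} \mathbb{P}(A_i \cap A_j)
+ \cdots
+ (-1)^{n+1}\,\mathbb{P}(A_1 \cap \cdots \cap A_n),
```

and truncating the alternating sum after a positive term gives an upper bound, after a negative term a lower bound, e.g. \(\mathbb{P}(\bigcup_i A_i) \ge \sum_i \mathbb{P}(A_i) - \sum_{i<j} \mathbb{P}(A_i \cap A_j)\).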

Proof of Result 48? See http://math.stackexchange.com/questions/1303735/questions-on-kolmogorov-zero-one-law-proof-in-williams

8. Existence and Uniqueness: Example 12 uses Theorem 10 from Chapter 0, "12. Special Set Structures": suppose that 𝒮 is a semi-algebra of subsets of S; then the collection 𝒮* of finite, disjoint unions of sets in 𝒮 is an algebra.

6. Distribution and Quantile Functions

2. The intervals (−∞, x_n] are increasing in n and have union (−∞, x); the other case gives (−∞, x]. Note the distinction: a distribution function is right continuous but need not be left continuous!!
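
The distinction can be spelled out via continuity of measures (standard argument, my own summary):

```latex
x_n \uparrow x:\quad \bigcup_{n} (-\infty, x_n] = (-\infty, x)
\;\Longrightarrow\; F(x_n) \to \mathbb{P}(X < x) = F(x^-),
\qquad
x_n \downarrow x:\quad \bigcap_{n} (-\infty, x_n] = (-\infty, x]
\;\Longrightarrow\; F(x_n) \to \mathbb{P}(X \le x) = F(x),
```

so F is always right continuous, while left continuity fails exactly at points x with \(\mathbb{P}(X = x) > 0\).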

9. General Distribution Functions

Distribution Functions on [0, ∞) — is the integral here the Lebesgue–Stieltjes integral, as in https://www.statlect.com/fundamentals-of-probability/expected-value?

Also:

2. If F is a distribution function on R, then there exists a unique measure μ on R that satisfies… Read the proof of this theorem together with the uniqueness of the measure extension at http://www.math.uah.edu/stat/prob/Existence.html#ext

This book's explanation of the differences and connections among the various kinds of integrals is unclear; I need to summarize them myself!!! See: http://math.stackexchange.com/questions/380785/what-does-it-mean-to-integrate-with-respect-to-the-distribution-function

Chapter 2, Section 11, Properties of Integrals: "u_n is increasing in n, v_n is decreasing in n, and u_n → f and v_n → f as n → ∞" — I don't understand this! The condition for the limit to exist should be lim sup = lim inf.
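
A guess at the intended construction — the usual running infima/suprema (my reconstruction, not quoted from the book):

```latex
u_n = \inf_{k \ge n} f_k \;(\text{increasing in } n),
\qquad
v_n = \sup_{k \ge n} f_k \;(\text{decreasing in } n),
\qquad
u_n \uparrow \liminf_{n} f_n, \quad v_n \downarrow \limsup_{n} f_n,
```

so if \(\liminf_n f_n = \limsup_n f_n = f\) (i.e. \(f_n \to f\) pointwise), then both \(u_n \to f\) and \(v_n \to f\).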

P_Y(a, b] = P(a < Y ≤ b) = F_Y(b) − F_Y(a) for a, b ∈ R, a < b. Here Y = r(X); where does this distribution function come from?
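
For Y = r(X) with r measurable, the distribution function comes from pushing X forward through r (standard change of variables, my own summary):

```latex
F_Y(y) = \mathbb{P}(Y \le y) = \mathbb{P}\big(r(X) \le y\big)
       = \mathbb{P}\big(X \in r^{-1}(-\infty, y]\big),
\qquad
\mathbb{P}_Y(a, b] = \mathbb{P}(a < Y \le b) = F_Y(b) - F_Y(a).
```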

12. Absolute Continuity and Density Functions

In the proof of Result 13 (existence of a density function), "Let f = Σ_i 1_{A_i} f_i" contains an error; it should be f = Σ_i f_i, without the 1_{A_i}.

A question for the author: is the dx in the integral taken with respect to dμ?

kernel function: "The operator f ↦ Kf defined for measurable functions f: S → R" — typo here; it should be f: T → R.

stochastic process: But P(t ↦ X_t is discontinuous) = P(t ↦ Y_t is continuous) = P(Ω) = 1

Finite-dimensional distributions, Kolmogorov's Theorem: the proof is best read here: http://129.81.170.14/~wentzell/Sec22.pdf — already downloaded; search by its name

Isn't there a problem with the first consistency condition in the proof of Kolmogorov's Theorem? It should be P_{tπ}(πC) = P_t(C).

https://www.quora.com/How-does-one-draw-the-intuitive-explanation-of-a-Borel-sigma-algebra-stochastic-filtration-that-it-is-equivalent-to-the-amount-of-information-known-till-time-t — a nice explanation of the sample space and filtration of a stochastic process! http://blog.tombowles.me.uk/2013/11/04/sigma-algebras-and-filtrations-in-probability-theory-part-1/#fn2 — explains why we define sigma-algebras, why measurability, and why filtrations!! Also very good.

So if X is right continuous, then X is progressively measurable with respect to any filtration to which X is adapted. How is this proved?

Result 24 on stopping times: in the proof, "Suppose first that τ is a stopping time relative to F" — there is a typo here; it should be \{\tau \lt t\} \in \mathscr{F}_t
