Paddle 2.0 beta Upgraded API List
XiaoguangHu edited this page Sep 9, 2020
Paddle 1.8 API | Paddle 2.0-beta API | Upgrade notes | Upgrade PR |
---|---|---|---|
paddle.fluid.layers.zeros(shape, dtype, force_cpu=False) | paddle.zeros(shape, dtype=None, name=None) | Removed the device and out parameters; renamed input to x (see the tensor-creation sketch after the table) | #25860 |
paddle.fluid.layers.zeros_like(x, out=None) | paddle.zeros_like(x, dtype=None, name=None) | New; creates an all-zeros Tensor with the same shape as x | #26411 |
paddle.fluid.layers.ones(shape, dtype, force_cpu=False) | paddle.ones(shape, dtype=None, name=None) | Removed the device and out parameters; renamed input to x | #25497 |
paddle.fluid.layers.ones_like(x, out=None) | paddle.ones_like(x, dtype=None, name=None) | New; creates an all-ones Tensor with the same shape as x | #25663 |
paddle.fluid.layers.arange(start, end, step=1, dtype=None, name=None) | paddle.arange(start=0, stop=None, step=1, dtype=None, name=None) | 1. Renamed from range to arange; creates a Tensor of values from start to stop with stride step. 2. Added defaults start=0, stop=None, step=1, dtype=None, together with the corresponding default behavior | #25452 |
paddle.fluid.layers.linspace(start, stop, num, dtype, out=None, device=None, name=None) | paddle.linspace(start, stop, num, dtype=None, name=None) | Removed the out and device parameters | #25257 |
paddle.fluid.layers.eye(num_rows, num_columns=None, out=None, dtype='float32', stop_gradient=True, name=None) | paddle.eye(num_rows, num_columns=None, dtype=None, name=None) | Removed the out and stop_gradient parameters | #25257 |
paddle.fluid.layers.full(shape, fill_value, out=None, dtype=None, device=None, stop_gradient=True, name=None) | paddle.full(shape, fill_value, dtype=None, name=None) | Removed the out, device, and stop_gradient parameters | #25257 |
paddle.fluid.layers.full_like(input, fill_value, out=None, dtype=None, device=None, stop_gradient=True, name=None) | paddle.full_like(x, fill_value, dtype=None, name=None) | Removed the out, device, and stop_gradient parameters | #25294 |
paddle.fluid.layers.concat(input, axis=0, name=None) | paddle.concat(x, axis=0, name=None) | Renamed input to x | #25307 |
paddle.fluid.layers.gather(input, index, overwrite=True) | paddle.gather(x, index, axis=None, name=None); paddle.gather_nd(x, index, name=None) | gather: renamed input to x and added an axis parameter to choose the dimension to gather along. gather_nd: renamed input to x | #26455 |
paddle.fluid.layers.index_select(input, index, dim=0) | paddle.index_select(x, index, axis=0, name=None) | Renamed input to x and dim to axis | #25257 |
paddle.fluid.layers.reshape(x, shape, actual_shape=None, act=None, inplace=False, name=None) | paddle.reshape(x, shape, name=None) | Removed three parameters: actual_shape, act, and inplace (see the tensor-manipulation sketch after the table) | #26338 |
paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None) | paddle.split(x, num_or_sections, axis=0, name=None) | Renamed input to x and dim to axis | #25257 |
paddle.fluid.layers.squeeze(input, axes, name=None) | paddle.squeeze(x, axis=None, name=None) | 1. Removed the out parameter; renamed input to x and axes to axis. 2. axis accepts int, list, tuple, or None. 3. Squeezing a dimension whose size is not 1 leaves it unchanged | #25281 |
paddle.fluid.layers.stack(x, axis=0) | paddle.stack(x, axis=0, name=None) | 1. The input no longer accepts a single Tensor; only List[Tensor] and Tuple[Tensor] are supported. 2. Added the name parameter | #25305 |
paddle.fluid.layers.unsqueeze(input, axes, name=None) | paddle.unsqueeze(x, axis, name=None) | Renamed input to x and axes to axis | #25470 |
paddle.fluid.layers.uniform_random(shape, dtype='float32', min=-1.0, max=1.0, seed=0) | paddle.rand(shape, dtype=None, name=None) | New; creates a random Tensor uniformly distributed over [0, 1) | #25246 |
paddle.fluid.layers.randint(low, high=None, shape=None, out=None, dtype=None, device=None, stop_gradient=False, seed=0, name=None) | paddle.randint(low=0, high=None, shape=None, dtype=None, name=None) | New; creates an integer Tensor drawn from a uniform distribution | #25433 |
paddle.fluid.layers.randn(shape, out=None, dtype=None, device=None, stop_gradient=True, name=None) | paddle.randn(shape, dtype=None, name=None) | New; creates a Tensor drawn from a standard normal distribution | #25409 |
paddle.fluid.layers.randperm(n, out=None, dtype="int64", device=None, stop_gradient=True, seed=0) | paddle.randperm(n, dtype="int64", name=None) | New; creates a Tensor holding a random permutation of [0, n) | #25410 |
paddle.fluid.dygraph.no_grad(func=None) | paddle.no_grad() | The decorator form must now be instantiated before use; added support for generator functions | #25472 |
paddle.fluid.layers.abs(x, name=None) | paddle.abs(x, name=None) | Documentation updated | #25942 |
paddle.fluid.layers.acos(x, name=None) | paddle.acos(x, name=None) | Documentation updated | #25958 |
paddle.fluid.layers.elementwise_add(x, y, axis=-1, act=None, name=None) | paddle.add(x, y, name=None) | Removed the alpha parameter | #25910 |
paddle.fluid.layers.asin(x, name=None) | paddle.asin(x, name=None) | Documentation updated | #25967 |
paddle.fluid.layers.atan(x, name=None) | paddle.atan(x, name=None) | Documentation updated | #25968 |
paddle.fluid.layers.ceil(x, name=None) | paddle.ceil(x, name=None) | Documentation updated | |
paddle.fluid.layers.clamp(input, min=None, max=None, output=None, name=None) | paddle.clip(x, min=None, max=None, name=None) | Renamed clamp to clip; min/max accept int values; both min and max may be None; fixed an overflow bug when min/max is None (see the elementwise/reduction sketch after the table) | #25906 |
paddle.fluid.layers.cos(x, name=None) | paddle.cos(x, name=None) | Documentation updated | #25969 |
paddle.fluid.layers.erf(x) | paddle.erf(x, name=None) | Added the name parameter | #26426 https://github.com/PaddlePaddle/FluidDoc/pull/2403 |
paddle.fluid.layers.exp(x, name=None) | paddle.exp(x, name=None) | Documentation updated | #25258 |
paddle.fluid.layers.floor(x, name=None) | paddle.floor(x, name=None) | Documentation updated | #25292 |
paddle.fluid.layers.log(x, name=None) | paddle.log(x, name=None) | Documentation updated | |
paddle.fluid.layers.log1p(x, out=None, name=None) | paddle.log1p(x, name=None) | Removed the out parameter | #25488 https://github.com/PaddlePaddle/FluidDoc/pull/2283 |
paddle.fluid.layers.logical_and(x, y, out=None, name=None) | paddle.logical_and(x, y, name=None) | 1. Broadcast is not supported. 2. Verified against Paddle's current no-broadcast behavior: an error is raised when the two input Tensors have different shapes | #26490 |
paddle.fluid.layers.logical_not(x, out=None, name=None) | paddle.logical_not(x, name=None) | 1. Broadcast is not supported. 2. Verified against Paddle's current no-broadcast behavior: an error is raised when the two input Tensors have different shapes | #26491 |
paddle.fluid.layers.logical_or(x, y, out=None, name=None) | paddle.logical_or(x, y, name=None) | 1. Broadcast is not supported. 2. Verified against Paddle's current no-broadcast behavior: an error is raised when the two input Tensors have different shapes | #26492 |
paddle.fluid.layers.logical_xor(x, y, out=None, name=None) | paddle.logical_xor(x, y, name=None) | 1. Broadcast is not supported. 2. Verified against Paddle's current no-broadcast behavior: an error is raised when the two input Tensors have different shapes | #26493 |
paddle.fluid.layers.elementwise_mul(x, y, axis=-1, act=None, name=None) | paddle.multiply(x, y, name=None) | 1. Error messages are made concise and readable and moved to the Python side. 2. Paddle does not yet support type promotion | #26494 |
paddle.fluid.layers.elementwise_pow(x, y, axis=-1, act=None, name=None) | paddle.pow(x, y, name=None) | 1. Merged elementwise_pow and pow into one API. 2. Renamed power to pow. 3. Mixed-type inputs are supported, e.g. pow(x: int, y: float) and pow(x: float, y: int). 4. Accepts Variables and Python scalars (int, float). 5. Supports float32, float64, int32, and int64 with NumPy-consistent behavior. 6. Fixed the code issues raised in the review comments | #26495 |
paddle.fluid.layers.reciprocal(x, name=None) | paddle.reciprocal(x, name=None) | | #26496 |
paddle.fluid.layers.round(x, name=None) | paddle.round(x, name=None) | Documentation updated | #26497 |
paddle.fluid.layers.rsqrt(x, name=None) | paddle.rsqrt(x, name=None) | Documentation updated | #26498 |
paddle.fluid.layers.sigmoid(x, name=None) | paddle.sigmoid(x, name=None) | Added a functional version | #26499 |
paddle.fluid.layers.sign(x) | paddle.sign(x, name=None) | Improved the example code | #26500 |
paddle.fluid.layers.sin(x, name=None) | paddle.sin(x, name=None) | Documentation updated | #26501 |
paddle.fluid.layers.sqrt(x, name=None) | paddle.sqrt(x, name=None) | Documentation updated | #26502 |
paddle.fluid.layers.square(x, name=None) | paddle.square(x, name=None) | Documentation updated | #26503 |
paddle.fluid.layers.tanh(x, name=None) | paddle.tanh(x, name=None) | Renamed input to x; removed the out parameter | #26504 |
paddle.argmax(input, axis=None, dtype=None, out=None, keepdims=False, name=None) | paddle.argmax(x, axis=None, dtype=None, keepdim=False, name=None) | 1. Added the name, dtype, and keepdim parameters. 2. When axis is None, the Tensor is flattened before argmax is computed | #26505 |
paddle.fluid.layers.argmin(x, axis=0) | paddle.argmin(x, axis=None, dtype=None, keepdim=False, name=None) | 1. Added the name, dtype, and keepdim parameters. 2. When axis is None, the Tensor is flattened before argmin is computed | #26506 |
paddle.fluid.layers.logsumexp(x, dim=None, keepdim=False, out=None, name=None) | paddle.logsumexp(x, axis=None, keepdim=False, name=None) | New; computes log(sum(exp(x))) along axis | |
paddle.fluid.layers.mean(x, name=None) | paddle.mean(x, axis=None, keepdim=False, name=None) | 1. Renamed reduce_mean to mean; computes the mean of x along axis. 2. Renamed input to x, dim to axis, and keep_dim to keepdim. 3. Unified the previous mean and reduce_mean. 4. Removed support for int32/int64 inputs because integer inputs produced incorrect results | #26147 |
paddle.tensor.linalg.norm(input, p='fro', axis=None, keepdim=False, out=None, name=None) | paddle.norm(x, p='fro', axis=None, keepdim=False, name=None) | New; computes the norm of x along axis; supports the Frobenius norm, the 0-, 1-, 2-, inf-, and -inf-norms, and p-norms with p > 0 | #26492 |
paddle.fluid.layers.reduce_prod(input, dim=None, keep_dim=False, name=None) | paddle.prod(x, axis=None, keepdim=False, dtype=None, name=None) | 1. Renamed input to x. 2. Renamed dim to axis, keeping the default axis=None. 3. Renamed keep_dim to keepdim. 4. Added a dtype parameter | #26351 |
paddle.fluid.layers.sum(x) | paddle.sum(x, axis=None, dtype=None, keepdim=False, name=None) | 1. Renamed input to x. 2. Renamed dim to axis. 3. Renamed keep_dim to keepdim | #26337 |
paddle.fluid.layers.unique(x, dtype='int32') | paddle.unique(x, axis=None, return_index=False, return_inverse=False, return_counts=False, name=None) | 1. Added the return_index, return_inverse, return_counts, and axis parameters. 2. Changed the default dtype of the three optional outputs from int32 to int64. 3. Removed the restriction that the input must be a 1-D Tensor. 4. Changed the default output behavior: by default only one output is returned, the unique elements of the input in ascending order. 5. Changed the number of outputs: the optional indices, inverse, and counts outputs are returned according to the return_index, return_inverse, and return_counts flags; otherwise only the single unique output is returned | #26537 |
paddle.fluid.layers.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False, name=None) | paddle.allclose(x, y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None) | Renamed input to x and other to y | #26360 |
paddle.fluid.layers.argsort(input, axis=-1, descending=False, name=None) | paddle.argsort(x, axis=-1, descending=False, name=None) | 1. Renamed input to x; added the name parameter. 2. argsort no longer returns the sorted values, only the sorted indices. 3. Added a sort API with essentially the same parameters as argsort; sort returns only the sorted values | #25514 |
paddle.fluid.layers.elementwise_equal(x, y, name=None) | paddle.equal(x, y, name=None) | Removed the cond parameter; added the name parameter | #25448 |
paddle.fluid.layers.elementwise_equal(x, y, name=None) | paddle.equal_all(x, y, name=None) | New; checks whether two Tensors have exactly the same elements | #25448 |
paddle.fluid.layers.greater_equal(x, y, cond=None) | paddle.greater_equal(x, y, name=None) | Removed the cond parameter; added the name parameter | #25448 |
paddle.fluid.layers.greater_than(x, y, cond=None) | paddle.greater_than(x, y, name=None) | Removed the cond parameter; added the name parameter | #25448 |
paddle.fluid.layers.isfinite(x) | paddle.isfinite(x, name=None) | Added the paddle.tensor.isfinite API, which returns a boolean Tensor indicating element-wise whether each element of the input is neither NaN nor Inf (unlike the previous paddle.fluid.layers.isfinite, which returned a single boolean value) | #26344 |
paddle.fluid.layers.less_equal(x, y, cond=None) | paddle.less_equal(x, y, name=None) | Removed the cond parameter; added the name parameter | #25448 |
paddle.fluid.layers.less_than(x, y, cond=None) | paddle.less_than(x, y, name=None) | Removed the cond parameter; added the name parameter | #25448 |
paddle.fluid.layers.elementwise_max(x, y, axis=-1, act=None, name=None) | paddle.max(x, axis=None, keepdim=False, name=None); paddle.maximum(x, y, name=None) | 1. paddle.min/max are aligned with numpy.min/max and with fluid's reduce_min/reduce_max. 2. Added maximum/minimum APIs aligned with numpy.maximum/minimum and with the corresponding fluid elementwise APIs | #25580 |
paddle.fluid.layers.elementwise_min(x, y, axis=-1, act=None, name=None) | paddle.min(x, axis=None, keepdim=False, name=None); paddle.minimum(x, y, name=None) | 1. paddle.min/max are aligned with numpy.min/max and with fluid's reduce_min/reduce_max. 2. Added maximum/minimum APIs aligned with numpy.maximum/minimum and with the corresponding fluid elementwise APIs | #25580 |
paddle.fluid.layers.not_equal(x, y, cond=None) | paddle.not_equal(x, y, name=None) | Removed the cond parameter; added the name parameter | #25580 |
paddle.fluid.layers.cross(input, other, dim=None) | paddle.cross(x, y, axis=None, name=None) | 1. Renamed the inputs input and other to x and y. 2. Renamed dim to axis. 3. Added the name parameter | #25354 https://github.com/PaddlePaddle/FluidDoc/pull/2287 |
paddle.fluid.layers.cumsum(x, axis=None, exclusive=None, reverse=None) | paddle.cumsum(x, axis=None, dtype=None, name=None) | 1. Greatly improved GPU performance for inputs where one dimension is larger than 1 and all other dimensions equal 1, e.g. shapes (N), (N, 1), or (1, 1, N) for any positive integer N. 2. Added a dtype parameter to specify the output Tensor's data type. 3. Added the name parameter. 4. When axis=None, the input is flattened to 1-D before the computation. 5. axis now supports all negative indices instead of only -1 | #25505 |
paddle.fluid.layers.diag(diagonal) | paddle.diag(x, offset=0, padding_value=0, name=None) | 1. Added support for extracting the diagonal elements of a 2-D matrix input. 2. Supports a diagonal offset to extract the main, upper, or lower diagonal. 3. Supports a fill value for the off-diagonal elements. 4. Added the name parameter | #26414 |
paddle.fluid.layers.flatten(x, axis=1, name=None) | paddle.flatten(x, start_axis=0, stop_axis=-1, name=None) | Added a flatten API in paddle.tensor.manipulation (flatten(x, start_axis=0, stop_axis=-1, name=None)) that flattens any contiguous range of Tensor dimensions | #25393 |
paddle.fluid.layers.flip(input, dims, name=None) | paddle.flip(x, axis, name=None) | Changed from flip(input, dims, name=None) to flip(x, axis, name=None) | #25312 |
paddle.fluid.layers.roll(input, shifts, dims=None) | paddle.roll(x, shifts, axis=None, name=None) | Changed from roll(input, shifts, dims=None) to roll(x, shifts, axis=None, name=None) | #25321 |
paddle.fluid.layers.trace(input, offset=0, dim1=0, dim2=1) | paddle.trace(x, offset=0, axis1=0, axis2=1, name=None) | Changed from trace(input, offset=0, dim1=0, dim2=1, name=None) to trace(x, offset=0, axis1=0, axis2=1, name=None) | #25397 |
paddle.fluid.layers.addmm(input, x, y, alpha=1.0, beta=1.0, name=None) | paddle.addmm(input, x, y, beta=1.0, alpha=1.0, name=None) | Changed from addmm(input, x, y, alpha=1.0, beta=1.0, name=None) to addmm(input, x, y, beta=1.0, alpha=1.0, name=None) | #25529 |
paddle.fluid.layers.dot(x, y, name=None) | paddle.dot(x, y, name=None) | Improved the example code | #25250 |
paddle.fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False, alpha=1.0, name=None) | paddle.matmul(x, y, transpose_x=False, transpose_y=False, name=None) | New; removed the alpha parameter and added full broadcast support; behavior is fully consistent with numpy.matmul | #26411 |
paddle.fluid.dygraph.Conv2D(num_channels, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, dtype='float32') | paddle.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCHW', name=None); paddle.nn.Conv2d.forward(x) | 1. Renamed Conv2D to Conv2d. 2. Removed the use_cudnn parameter. 3. Renamed param_attr to weight_attr. 4. Added the padding_mode parameter (see the layers/functional sketch after the table) | #26491 |
paddle.fluid.dygraph.Conv3D(num_channels, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, dtype="float32") | paddle.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCDHW', name=None); paddle.nn.Conv3d.forward(x) | 1. Renamed Conv3D to Conv3d. 2. Removed the use_cudnn parameter. 3. Renamed param_attr to weight_attr. 4. Added the padding_mode parameter | #26491 |
paddle.fluid.dygraph.Conv2DTranspose(num_channels, num_filters, filter_size, output_size=None, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, dtype="float32") | paddle.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCHW', name=None); paddle.nn.ConvTranspose2d.forward(x, output_size=None) | 1. Renamed Conv2DTranspose to ConvTranspose2d. 2. Removed the use_cudnn parameter. 3. Renamed param_attr to weight_attr. 4. Added the output_padding parameter. 5. forward gains an output_size parameter | #26427 |
paddle.fluid.dygraph.Conv3DTranspose(num_channels, num_filters, filter_size, output_size=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, dtype="float32") | paddle.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCDHW', name=None); paddle.nn.ConvTranspose3d.forward(x, output_size=None) | 1. Renamed Conv3DTranspose to ConvTranspose3d. 2. Removed the use_cudnn parameter. 3. Renamed param_attr to weight_attr. 4. Added the output_padding parameter. 5. forward gains an output_size parameter | #26427 |
paddle.fluid.layers.leaky_relu(x, alpha=0.02, name=None) | paddle.nn.LeakyReLU(negative_slope=0.01, name=None); paddle.nn.LeakyReLU.forward(x) | New; a class that computes the leaky_relu activation | #26216 |
paddle.fluid.dygraph.PRelu(mode, input_shape=None, param_attr=None, dtype="float32") | paddle.nn.PReLU(num_parameters=1, init=0.25, weight_attr=None, name=None); paddle.nn.PReLU.forward(x) | New; a class that computes the PReLU activation | #26431 |
paddle.nn.LogSoftmax(axis=None) | paddle.nn.LogSoftmax(axis=-1, name=None); paddle.nn.LogSoftmax.forward(x) | New; a class that computes the LogSoftmax activation | #26088 |
paddle.fluid.dygraph.GroupNorm(channels, groups, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, data_layout='NCHW', dtype="float32") | paddle.nn.GroupNorm(num_groups, num_channels, epsilon=1e-05, weight_attr=None, bias_attr=None, data_format="NCHW", name=None); paddle.nn.GroupNorm.forward(x) | 1. Renamed parameters: channels to num_channels, groups to num_groups, data_layout to data_format. 2. Removed the act and dtype parameters. 3. weight_attr and bias_attr control the affine parameters; when set to False, no scale/shift is applied | #26465 |
paddle.fluid.dygraph.LayerNorm(normalized_shape, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, dtype="float32") | paddle.nn.LayerNorm(normalized_shape, epsilon=1e-05, weight_attr=None, bias_attr=None, name=None); paddle.nn.LayerNorm.forward(x) | Removed the scale, shift, begin_norm_axis, act, and dtype parameters; renamed param_attr to weight_attr; added the name parameter | #26465 |
paddle.fluid.layers.RNNCell(name=None) | paddle.nn.SimpleRNNCell(input_size, hidden_size, activation="tanh", weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None); paddle.nn.SimpleRNNCell.forward(self, inputs, states=None) | Added paddle.nn.SimpleRNNCell | #26588 |
paddle.fluid.layers.LSTMCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, forget_bias=1.0, dtype="float32", name="LSTMCell") | paddle.nn.LSTMCell(input_size, hidden_size, weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None); paddle.nn.LSTMCell.forward(self, inputs, states=None) | Added paddle.nn.LSTMCell | #26588 |
paddle.fluid.layers.GRUCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, dtype="float32", name="GRUCell") | paddle.nn.GRUCell(input_size, hidden_size, weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None); paddle.nn.GRUCell.forward(self, inputs, states=None) | Added paddle.nn.GRUCell | #26588 |
paddle.fluid.dygraph.Linear(input_dim, output_dim, param_attr=None, bias_attr=None, act=None, dtype='float32') | paddle.nn.Linear(in_features, out_features, weight_attr=None, bias_attr=None, name=None); forward(self, x) | Renamed input_dim to in_features, output_dim to out_features, and param_attr to weight_attr; removed the dtype parameter | #26480 |
paddle.fluid.dygraph.BilinearTensorProduct(input1_dim, input2_dim, output_dim, name=None, act=None, param_attr=None, bias_attr=None, dtype="float32") | paddle.nn.Bilinear(in1_features, in2_features, out_features, weight_attr=None, bias_attr=None, name=None); forward(self, x, y) | 1. Renamed BilinearTensorProduct to Bilinear. 2. Removed the act and dtype parameters and renamed several parameters to stay consistent with Torch. 3. Refactored the implementation; the actual implementation lives in paddle.nn.functional.bilinear. 4. Added an in_dygraph_mode branch for dynamic-graph mode. 5. Behavior matches the old API | #26399 #26610 |
paddle.fluid.dygraph.Dropout(p=0.5, seed=None, dropout_implementation='downgrade_in_infer', is_test=False) | paddle.nn.Dropout(p=0.5, axis=None, mode="upscale_in_train", name=None) | Upgraded; renamed the mode parameter (formerly dropout_implementation) and changed its default; added the axis parameter and its functionality | #26111 |
paddle.fluid.dygraph.Embedding(size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32') | paddle.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, sparse=False, weight_attr=None, name=None); paddle.nn.Embedding.forward(self, x) | Added paddle.nn.Embedding; removed is_distributed and dtype; added the name and weight_attr parameters; added range checking for padding_idx | #26649 |
paddle.fluid.layers.L1Loss(reduction='mean') | paddle.nn.loss.L1Loss(reduction='mean', name=None); paddle.nn.loss.L1Loss.forward(self, input, label) | 1. Added the name parameter. 2. The forward method calls nn.functional.l1_loss (see the loss-function sketch after the table) | #26040 |
paddle.fluid.dygraph.MSELoss(input, label) | paddle.nn.loss.MSELoss(reduction='mean', name=None); paddle.nn.loss.MSELoss.forward(self, input, label) | Fixed errors in the Chinese documentation | https://github.com/PaddlePaddle/FluidDoc/pull/2339 |
paddle.fluid.dygraph.NLLLoss(weight=None, reduction='mean', ignore_index=-100) | paddle.nn.loss.NLLLoss(weight=None, ignore_index=-100, reduction='mean', name=None); paddle.nn.loss.NLLLoss.forward(self, input, label) | 1. Moved the forward implementation into the nll_loss function. 2. Swapped the order of the ignore_index and reduction parameters. 3. Added a dynamic-graph op call to improve dynamic-graph performance. 4. Added the name parameter | #26019 |
paddle.fluid.dygraph.BCELoss(input, label, weight=None, reduction='mean') | paddle.nn.BCELoss(weight=None, reduction='mean', name=None); paddle.nn.BCELoss.forward(self, input, label) | New; computes the binary cross-entropy loss | #26012 |
paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCHW") | paddle.nn.functional.conv2d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_format='NCHW', name=None) | 1. Removed the use_cudnn, act, and dtype parameters. 2. Renamed param_attr to weight_attr. 3. Added the padding_mode parameter | #26491 |
paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCDHW") | paddle.nn.functional.conv3d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_format='NCDHW', name=None) | 1. Removed the use_cudnn, act, and dtype parameters. 2. Renamed param_attr to weight_attr. 3. Added the padding_mode parameter | #26491 |
paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCHW') | paddle.nn.functional.conv_transpose2d(x, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1, data_format='NCHW', name=None) | 1. Renamed conv2d_transpose to conv_transpose2d. 2. Removed the use_cudnn, act, and dtype parameters. 3. Renamed param_attr to weight_attr. 4. Added the output_padding parameter | #26427 |
paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCDHW') | paddle.nn.functional.conv_transpose3d(x, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1, data_format='NCDHW', name=None) | 1. Renamed conv3d_transpose to conv_transpose3d. 2. Removed the use_cudnn, act, and dtype parameters. 3. Renamed param_attr to weight_attr. 4. Added the output_padding parameter | #26427 |
paddle.fluid.layers.relu(x, name=None) | paddle.nn.functional.relu(x, name=None) | A function that computes the relu activation | #26304 |
paddle.fluid.layers.relu6(x, threshold=6.0, name=None) | paddle.nn.functional.relu6(x, name=None) | Renamed input to x | #26376 |
paddle.fluid.layers.elu(x, alpha=1.0, name=None) | paddle.nn.functional.elu(x, alpha=1.0, name=None) | A function that computes the elu activation | #26304 |
paddle.fluid.layers.selu(x, scale=None, alpha=None, name=None) | paddle.nn.functional.selu(x, name=None) | Restricted the valid ranges of scale and alpha to prevent incorrect results when scale < 1.0 or alpha < 0 | #26376 |
paddle.fluid.layers.leaky_relu(x, alpha=0.02, name=None) | paddle.nn.functional.leaky_relu(x, negative_slope=0.01, name=None) | 1. A function that computes the leaky_relu activation. 2. Renamed alpha to negative_slope. 3. Changed the default negative_slope from 0.02 to 0.01 | #26216 |
paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None) | paddle.nn.functional.prelu(x, weight, name=None) | 1. A function that computes the prelu activation. 2. Removed the param_attr parameter in favor of an input Tensor weight. 3. Removed the mode parameter; the mode is inferred from the shape of weight | #26304 |
paddle.fluid.layers.gelu(x) | paddle.nn.functional.gelu(x, name=None) | A function that computes the gelu activation; added the name parameter | #26304 |
paddle.fluid.layers.logsigmoid(x, name=None) | paddle.nn.functional.log_sigmoid(x, name=None) | A function that computes the log_sigmoid activation | #26304 |
paddle.fluid.layers.hard_shrink(x, threshold=None) | paddle.nn.functional.hardshrink(x, threshold=0.5, name=None) | 1. A function that computes the hardshrink activation. 2. Changed the default threshold from None to 0.5. 3. Fixed incorrect results when threshold < 0 | #26198 |
paddle.fluid.layers.tanh_shrink(x, name=None) | paddle.nn.functional.tanhshrink(x, name=None) | Renamed tanh_shrink to tanhshrink | #26376 |
paddle.fluid.layers.softsign(x, name=None) | paddle.nn.functional.softsign(x, name=None) | Renamed input to x | #26376 |
paddle.fluid.layers.softplus(x, name=None) | paddle.nn.functional.softplus(x, beta=1, threshold=20, name=None) | Added the beta and threshold parameters with defaults 1 and 20 | #26376 |
paddle.fluid.layers.softmax(input, use_cudnn=False, name=None, axis=-1) | paddle.nn.functional.softmax(x, axis=-1, dtype=None, name=None) | 1. A function that computes the softmax activation. 2. Added a dtype parameter to help avoid overflow in the result. 3. Removed the use_cudnn parameter; whether to use cuDNN acceleration is decided automatically | #26304 |
paddle.fluid.layers.softshrink(x, alpha=None) | paddle.nn.functional.softshrink(x, threshold=0.5, name=None) | Renamed alpha to threshold | #26376 |
paddle.fluid.layers.log_softmax(input, axis=None, dtype=None, name=None) | paddle.nn.functional.log_softmax(x, axis=-1, dtype=None, name=None) | New; a function that computes the log_softmax activation | #26088 |
paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, use_global_stats=False) | paddle.nn.functional.batch_norm(x, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.9, epsilon=1e-05, data_format="NCHW", name=None) | New; called by BatchNorm1d/2d/3d | #26465 |
paddle.fluid.layers.instance_norm(input, epsilon=1e-05, param_attr=None, bias_attr=None, name=None) | paddle.nn.functional.instance_norm(x, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05, data_format="NCHW", name=None) | New; called by InstanceNorm1d/2d/3d | #26465 |
paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None) | paddle.nn.functional.layer_norm(x, normalized_shape, weight=None, bias=None, epsilon=1e-05, name=None) | New; called by LayerNorm | #26465 |
paddle.fluid.layers.nn.dropout(x, dropout_prob, is_test=False, seed=None, name=None, dropout_implementation='downgrade_in_infer') | paddle.nn.functional.dropout(x, p=0.5, axis=None, training=True, mode="upscale_in_train", name=None) | Upgraded; renamed the mode parameter and changed its default; added the axis parameter and its functionality | #26111 |
paddle.fluid.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32') | paddle.nn.functional.embedding(x, weight, padding_idx=None, sparse=False, name=None) | New; removed is_distributed; renamed input to x and is_sparse to sparse; added range checking for padding_idx | #26649 |
paddle.fluid.input.one_hot(input, depth, allow_out_of_range=False) | paddle.nn.functional.one_hot(x, num_classes=-1, name=None) | 1. Renamed input to x. 2. Renamed depth to num_classes. 3. Removed the allow_out_of_range parameter. 4. Added name=None | #26183 #26585 |
paddle.fluid.layers.cos_sim(X, Y) | paddle.nn.functional.cosine_similarity(x, y, axis=1, eps=1e-8, name=None) | 1. Computes the cosine similarity of two Tensors. 2. Supports computing along a given axis. 3. Supports a custom epsilon value. 4. Supports broadcasting along the computed axis | #26106 |
paddle.fluid.layers.cross_entropy(input, label, soft_label=False, ignore_index=-100) | paddle.nn.functional.cross_entropy(input, label, weight=None, ignore_index=-100, reduction='mean', name=None) | Added paddle.nn.functional.cross_entropy | #26478 |
paddle.fluid.layers.warpctc(input, label, blank=0, norm_by_times=False, input_length=None, label_length=None) | paddle.nn.functional.ctc_loss(input, label, input_length, target_length, blank=0, reduction='mean', zero_infinity=False, name=None) | New; paddle.nn.functional.ctc_loss(input, label, input_length, target_length, blank=0, reduction='mean') | #26384 |
paddle.fluid.layers.kldiv_loss(x, target, reduction='mean', name=None) | paddle.nn.functional.kl_div(input, label, reduction='mean', name=None) | 1. Renamed kldiv_loss to kl_div. 2. Renamed x to input. 3. Renamed target to label | #25977 |
paddle.fluid.layers.mse_loss(input, label) | paddle.nn.functional.mse_loss(input, label, reduction='mean', name=None) | 1. Added the reduction parameter. 2. Added name=None | #26089 |
paddle.fluid.layers.margin_rank_loss(label, left, right, margin=0.1, name=None) | paddle.nn.functional.margin_ranking_loss(input1, input2, label, margin=0, reduction='mean', name=None) | 1. Added the reduction parameter. 2. Updated the parameter list accordingly | #26266 |
paddle.fluid.layers.pixel_shuffle(x, upscale_factor) | paddle.nn.functional.pixel_shuffle(x, upscale_factor, data_format="NCHW", name=None) | Added the data_format parameter (supporting channels-last layouts) and the name parameter | #26071 |
paddle.fluid.layers.pad(x, paddings, pad_value=0.0, name=None) | paddle.nn.functional.pad(x, pad, mode='constant', value=0, data_format="NCHW", name=None) | 1. Supports higher-dimensional padding (up to 5-D). 2. The order of the padding list is aligned with torch. 3. Added a circular mode. 4. Renamed the previous edge mode to replicate. 5. Added a class implementation | #26106 |
paddle.fluid.layers.interpolate(input, out_shape=None, scale=None, name=None, resample='BILINEAR', actual_shape=None, align_corners=True, align_mode=1, data_format='NCHW') | paddle.nn.functional.interpolate(x, size=None, scale_factor=None, mode='nearest', align_corners=False, align_mode=0, data_format='NCHW', name=None) | 1. Renamed input to x. 2. scale_factor accepts a list/tuple. 3. When scale is fractional, the computation matches torch 1.6.0 | #26520 |
paddle.fluid.layers.grid_sampler(x, grid, name=None) | paddle.nn.functional.grid_sample(x, grid, mode='bilinear', padding_mode='zeros', align_corners=None, name=None) | 1. Renamed grid_sampler to grid_sample. 2. Added the mode, padding_mode, and align_corners parameters | #26576 |
paddle.fluid.layers.affine_grid(theta, out_shape, name=None) | paddle.nn.functional.affine_grid(theta, size, align_corners=None) | 1. Added the align_corners parameter. 2. Optimized the CUDA kernel | #26385 |
paddle.fluid.dygraph.VarBase.backward(self, backward_strategy=None, retain_graph=False) | paddle.Tensor.backward(self, retain_graph=False) | 1. Tensor.backward() gains a retain_graph parameter that controls whether the backward computation graph is kept after the gradients have been computed; its usage is documented in detail. 2. BackwardStrategy was only used to align dynamic-graph and static-graph precision and does not need to be exposed to users; developers who want the original gradient-accumulation order can set FLAGS_sort_sum_gradient to True. The backward_strategy parameter was removed from Tensor.backward() accordingly (see the autograd sketch after the table) | 1) PR removing backward_strategy from Tensor.backward() and updating the English docs: #26506 2) Chinese docs PR: https://github.com/PaddlePaddle/FluidDoc/pull/2448 3) Further code and doc improvements: #26766 |
paddle.fluid.layers.rank(input) | paddle.Tensor.dim() | Added Tensor.dim(), Tensor.ndimension(), and Tensor.ndim, which return the number of dimensions of a Tensor, and Tensor.size(), which returns its shape | #26416 |
paddle.fluid.layers.expand(x, expand_times, name=None) | paddle.expand(x, shape, name=None); alias paddle.broadcast_to(x, shape, name=None) | 1. Replaced the expand_times parameter with shape. 2. Expands the input to the given shape. 3. A shape entry of -1 means the corresponding dimension stays unchanged. 4. Broadcasting by repetition multiples is no longer supported | #26290 |
paddle.fluid.layers.expand_as(x, target_tensor, name=None) | paddle.tensor.expand_as(x, y, name=None) | 1. Renamed target_tensor to y. 2. Broadcasting by repetition multiples is no longer supported | #26290 |
paddle.scatter(input, index, updates, name=None, overwrite=True) | paddle.scatter(x, index, updates, overwrite=True, name=None) | Improved the example code | #26248 |
paddle.imperative.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, no_grad_vars=None, backward_strategy=None) | paddle.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False) | 1. Renamed paddle.imperative.grad to paddle.grad. 2. Updated the paddle.grad examples to the 2.0 API. 3. BackwardStrategy was only used to align dynamic-graph and static-graph precision and does not need to be exposed to users; developers who want the original gradient-accumulation order can set FLAGS_sort_sum_gradient to True. The backward_strategy parameter was removed from paddle.grad() accordingly | 1) English docs PR: #26498 2) Chinese docs PR: https://github.com/PaddlePaddle/FluidDoc/pull/2444 3) PR removing backward_strategy from grad(): #26506 |
paddle.fluid.layers.Normal(loc, scale) | paddle.distribution.Normal(loc, scale, name=None) | 1. Moved the Normal API from fluid to paddle.distribution. 2. Added a probs method to the Normal class (the probability density function). 3. Fixed a sampling bug in sample and a hidden dtype bug in log_prob. 4. Added a name parameter to the Normal class and all of its methods | #26355 |
paddle.fluid.layers.Uniform(low, high) | paddle.distribution.Uniform(low, high, name=None) | 1. Moved the Uniform API from fluid to paddle.distribution. 2. Added a probs method to the Uniform class (the probability density function). 3. Fixed a sampling bug in sample and a hidden dtype bug in log_prob. 4. Added a name parameter to the Uniform class and all of its methods. 5. Moved the uniform_random API from fluid to paddle.tensor.random and renamed it uniform. 6. uniform reads its default data type from paddle.get_default_dtype | #26355 #26347 |
paddle.optimizer.Optimizer.set_dict() | paddle.optimizer.Optimizer.set_state_dict(state_dict) | 1. Renamed the optimizer's set_dict method to set_state_dict. 2. Renamed current_step_lr to get_lr. 3. Renamed clear_gradients to clear_grad; clear_gradients is still available (see the optimizer sketch after the table) | #26288 |
paddle.fluid.optimizer.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameter_list=None, regularization=None, grad_clip=None, name=None, lazy_mode=False) | paddle.optimizer.Adam(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameters=None, weight_decay=None, grad_clip=None, lazy_mode=False, name=None) | Renamed AdamOptimizer to Adam; parameter and method changes follow the Optimizer base class; added parameter range checks | #26288 |
paddle.fluid.optimizer.AdamaxOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameter_list=None, regularization=None, grad_clip=None, name=None) | paddle.optimizer.Adamax(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameters=None, weight_decay=None, grad_clip=None, name=None) | Renamed AdamaxOptimizer to Adamax; parameter and method changes follow the Optimizer base class; added parameter range checks | #26288 |
paddle.fluid.optimizer.RMSPropOptimizer(learning_rate, rho=0.95, epsilon=1e-06, momentum=0.0, centered=False, parameter_list=None, regularization=None, grad_clip=None, name=None) | paddle.optimizer.RMSProp(learning_rate, rho=0.95, epsilon=1e-06, momentum=0.0, centered=False, parameters=None, weight_decay=None, grad_clip=None, name=None) | Renamed RMSPropOptimizer to RMSProp; parameter and method changes follow the Optimizer base class; added parameter range checks | #26288 |
paddle.fluid.optimizer.SGDOptimizer(learning_rate, parameter_list=None, regularization=None, grad_clip=None, name=None) | paddle.optimizer.SGD(learning_rate, parameters=None, weight_decay=None, grad_clip=None, name=None); paddle.optimizer.Momentum(learning_rate, momentum, parameters=None, nesterov=False, weight_decay=None, grad_clip=None, name=None) | 1. Renamed SGDOptimizer to SGD; parameter and method changes follow the Optimizer base class; added parameter range checks. 2. Renamed MomentumOptimizer to Momentum; parameter and method changes follow the Optimizer base class; added parameter range checks | #26590 |
paddle.fluid.dygraph.learning_rate_scheduler.LambdaDecay | paddle.optimizer.lr_scheduler.LambdaLR(learning_rate, lr_lambda, last_epoch=-1, verbose=False) | 1. New API, named paddle.optimizer.LambdaLR. 2. lr_lambda adjusts the learning rate through a lambda function; last_epoch sets the starting epoch; verbose prints learning-rate changes | #26550 |
paddle.fluid.dygraph.learning_rate_scheduler.StepDecay | paddle.optimizer.lr_scheduler.StepLR(learning_rate, step_size, gamma=0.1, last_epoch=-1, verbose=False) | 1. Improved API; the new name is paddle.optimizer.StepLR. 2. Renamed decay_rate to gamma. 3. New parameters: last_epoch sets the starting epoch; verbose prints learning-rate changes | #26550 |
paddle.fluid.dygraph.learning_rate_scheduler.MultiStepDecay | paddle.optimizer.lr_scheduler.MultiStepLR(learning_rate, milestones, gamma=0.1, last_epoch=-1, verbose=False) | 1. Improved API; the new name is paddle.optimizer.MultiStepLR. 2. Renamed decay_rate to gamma. 3. New parameters: last_epoch sets the starting epoch; verbose prints learning-rate changes | #26550 |
paddle.fluid.dygraph.learning_rate_scheduler.CosineDecay | paddle.optimizer.lr_scheduler.CosineAnnealingLR(learning_rate, T_max, eta_min=0, last_epoch=-1, verbose=False) | 1. New API, named paddle.optimizer.CosineAnnealingLR. 2. Supports cosine-annealing learning-rate scheduling. 3. New parameters: T_max (the cycle length), eta_min (the minimum learning rate), last_epoch (starting epoch), verbose (print learning-rate changes) | #26550 |
paddle.fluid.dygraph.learning_rate_scheduler.ReduceLROnPlateau | paddle.optimizer.lr_scheduler.ReduceLROnPlateau(learning_rate, mode='min', factor=0.1, patience=10, threshold=1e-4, threshold_mode='rel', cooldown=0, min_lr=0, epsilon=1e-8, verbose=False) | 1. New API, named paddle.optimizer.ReduceLROnPlateau. 2. Supports metric-adaptive learning-rate scheduling that reduces the learning rate when the metric (usually the loss) stops decreasing | #26550 |
paddle.io.DataLoader(dataset, feed_list=None, places=None, return_list=False, batch_sampler=None, batch_size=1, shuffle=False, drop_last=False, collate_fn=None, num_workers=0, use_buffer_reader=True, use_shared_memory=True, timeout=0, worker_init_fn=None) | paddle.io.DataLoader(dataset, feed_list=None, places=None, return_list=False, batch_sampler=None, batch_size=1, shuffle=False, drop_last=False, collate_fn=None, num_workers=0, use_buffer_reader=True, use_shared_memory=True, timeout=0, worker_init_fn=None) | paddle.io.DataLoader now supports multi-process acceleration for streaming IterableDataset datasets (see the DataLoader sketch after the table) | #25558 |
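
The sketches below illustrate a handful of the migrations above with minimal, runnable code. They are written against the 2.0-beta names in this table rather than taken from the upgrade PRs, and they assume a working Paddle 2.0-beta (or later) install; `paddle.disable_static()` is called to make sure imperative (dygraph) mode is active. First, tensor creation without the removed `out`/`device`/`stop_gradient` parameters:

```python
import paddle

paddle.disable_static()  # ensure imperative (dygraph) mode

# zeros/ones: only shape, dtype, name remain
z = paddle.zeros([2, 3], dtype='float32')
o = paddle.ones([2, 3])

# arange: renamed from range, with start/stop/step defaults
r = paddle.arange(0, 10, 2, dtype='int64')

# full: out, device, stop_gradient are gone
f = paddle.full([2, 2], 7.0, dtype='float32')

# random creation APIs replacing uniform_random / randint / randn
u = paddle.rand([2, 3])                     # uniform over [0, 1)
i = paddle.randint(low=0, high=10, shape=[2, 3])
n = paddle.randn([2, 3])

print(z.shape, o.shape, r.numpy(), f.numpy(), u.shape, i.shape, n.shape)
```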
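A similar sketch for the manipulation APIs (reshape, concat, squeeze, unsqueeze, split) using the renamed x/axis parameters; the tensor values and shapes are arbitrary:

```python
import paddle

paddle.disable_static()
x = paddle.rand([2, 3, 1])

y = paddle.reshape(x, [3, 2])             # actual_shape/act/inplace removed
c = paddle.concat([y, y], axis=0)         # input -> x
s = paddle.squeeze(x, axis=2)             # axes -> axis; drops the size-1 dim
u = paddle.unsqueeze(s, axis=0)           # axes -> axis
a, b = paddle.split(c, num_or_sections=2, axis=0)  # dim -> axis

print(y.shape, c.shape, s.shape, u.shape, a.shape, b.shape)
```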
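The elementwise and reduction changes (add without act/axis, clamp renamed to clip, mean/sum with axis/keepdim, max as a reduction versus maximum as the elementwise op) can be exercised as follows; the input values are made up for illustration:

```python
import paddle

paddle.disable_static()
x = paddle.to_tensor([[1.0, -2.0], [3.0, -4.0]])
y = paddle.to_tensor([[0.5, 0.5], [0.5, 0.5]])

z = paddle.add(x, y)                         # elementwise_add -> add
clipped = paddle.clip(x, min=-1.0, max=1.0)  # clamp -> clip
m = paddle.mean(x, axis=0)                   # reduce_mean -> mean
s = paddle.sum(x, axis=1, keepdim=True)      # reduce_sum -> sum, keep_dim -> keepdim
mx = paddle.max(x, axis=1)                   # reduction, like numpy.max
ew = paddle.maximum(x, y)                    # elementwise maximum

print(z.numpy(), clipped.numpy(), m.numpy(), s.numpy(), mx.numpy(), ew.numpy())
```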
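For the paddle.nn layers and their functional counterparts, a small forward pass shows the renamed constructor arguments (in_channels/out_channels/kernel_size, weight_attr) and the new dropout signature. The shapes are arbitrary, and the class is spelled Conv2d as in the 2.0-beta column above:

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

paddle.disable_static()

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
act = nn.LeakyReLU(negative_slope=0.01)

x = paddle.rand([4, 3, 32, 32])              # NCHW input
y = act(conv(x))

h = F.relu(y)                                # functional activation
d = F.dropout(h, p=0.5, training=True, mode='upscale_in_train')

print(y.shape, d.shape)
```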
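The loss APIs follow the same pattern: the class form gains a name parameter and delegates to the functional form, which gains reduction. A toy regression example with random predictions and labels, purely illustrative:

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

paddle.disable_static()
pred = paddle.rand([8, 1])
label = paddle.rand([8, 1])

l1 = nn.loss.L1Loss(reduction='mean')        # class form
loss1 = l1(pred, label)

loss2 = F.mse_loss(pred, label, reduction='mean')  # functional form with reduction

print(loss1.numpy(), loss2.numpy())
```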
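The autograd changes (Tensor.backward without backward_strategy, paddle.grad instead of paddle.imperative.grad) in a minimal sketch with made-up values:

```python
import paddle

paddle.disable_static()
x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)

# Tensor.backward(): backward_strategy is gone; retain_graph is available
y = paddle.sum(x * x)
y.backward()
print(x.gradient())                          # dy/dx = 2x

# paddle.grad (formerly paddle.imperative.grad), no backward_strategy argument
z = paddle.sum(x * x * x)
gz, = paddle.grad(outputs=z, inputs=x, create_graph=False)
print(gz.numpy())                            # dz/dx = 3x^2
```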
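On the optimizer side, the sketch below wires paddle.optimizer.Adam (parameters instead of parameter_list, weight_decay instead of regularization) to a StepLR scheduler and uses the renamed clear_grad/get_lr/set_state_dict methods. Passing the scheduler object as learning_rate follows the 2.0-beta design; the tiny Linear model and random data are only for illustration:

```python
import paddle
import paddle.nn as nn

paddle.disable_static()
model = nn.Linear(in_features=4, out_features=1)

scheduler = paddle.optimizer.lr_scheduler.StepLR(learning_rate=0.01, step_size=2, gamma=0.5)
opt = paddle.optimizer.Adam(learning_rate=scheduler, parameters=model.parameters())

for epoch in range(4):
    x = paddle.rand([8, 4])
    loss = paddle.mean(model(x))
    loss.backward()
    opt.step()
    opt.clear_grad()                         # clear_gradients -> clear_grad
    scheduler.step()
    print(epoch, opt.get_lr())               # current_step_lr -> get_lr

state = opt.state_dict()
opt.set_state_dict(state)                    # set_dict -> set_state_dict
```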
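Finally, a sketch of the DataLoader change in the last row: reading a streaming paddle.io.IterableDataset with multiple worker processes. The RandomStream dataset and its sizes are made up; with num_workers > 0 each worker runs its own copy of __iter__, so a real stream would need to be sharded (e.g. via paddle.io.get_worker_info):

```python
import numpy as np
import paddle
from paddle.io import DataLoader, IterableDataset

paddle.disable_static()

class RandomStream(IterableDataset):
    """Toy streaming dataset yielding (feature, label) pairs."""
    def __iter__(self):
        for _ in range(100):
            feature = np.random.rand(4).astype('float32')
            label = np.random.randint(0, 2, (1,)).astype('int64')
            yield feature, label

# multi-process loading of an IterableDataset (the point of PR #25558)
loader = DataLoader(RandomStream(), batch_size=16, num_workers=2, return_list=True)

for features, labels in loader:
    print(features.shape, labels.shape)      # e.g. [16, 4] and [16, 1]
    break
```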