Should a function return null? E.g.:

function test()
{
    return null; // vs return;
}

Is the latter considered bad practice, or doesn't it matter?

PS: Whether it is bad practice shouldn't be subjective IMHO.
You should point PhpStorm to your vendor/autoload.php at Settings | PHP | PHPUnit when using PHPUnit via Composer.

This blog post has all the details (with pictures) for successfully configuring the IDE for such a scenario: http://confluence.jetbrains.com/display/PhpStorm/PHPUnit+Installation+via+Composer+in+PhpStorm

Related usability ticket: http://youtrack.jetbrains.com/issue/WI-18388

P.S. The WI-18388 ticket is already fixed in v8.0.
On Mac OS X, the environment variables available in Terminal and those available to normal applications can differ; check the related question for a solution on how to make them the same.

Note that this solution will not work on Mountain Lion (10.8).
If you are always expecting to find a value, then throw an exception if it is missing; the exception would mean that there was a problem.

If the value can be either missing or present, and both cases are valid for the application logic, then return null.

More important: what do you do in other places in the code? Consistency is important.
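To illustrate the two conventions, here is a minimal Python sketch (the function and variable names are hypothetical; the same reasoning applies in any language):

def require_setting(config, key):
    # Absence is a bug: fail loudly with an exception.
    if key not in config:
        raise KeyError("required setting %r is missing" % key)
    return config[key]

def find_user(users, user_id):
    # Absence is a normal outcome: None simply means "no such user".
    return users.get(user_id)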
Spark >= 3.0:
SPARK-24561 - User-defined window functions with pandas udf (bounded window) is a work in progress. Please follow the related JIRA for details.
Spark >= 2.4:
SPARK-22239 - User-defined window functions with pandas udf (unbounded window) introduced support for Pandas-based window functions with unbounded windows. The general structure is:
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.window import Window

return_type: DataType

@pandas_udf(return_type, PandasUDFType.GROUPED_AGG)
def f(v):
    return ...

w = (Window
    .partitionBy(grouping_column)
    .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))

df.withColumn('foo', f('bar').over(w))
Please see the doctests and the unit tests for detailed examples.
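For instance, a grouped mean over an unbounded window could look like this (a minimal runnable sketch; the toy DataFrame and column names are only illustrative):

from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 1.0), ("a", 2.0), ("b", 3.0)],
    ("id", "v"))

@pandas_udf("double", PandasUDFType.GROUPED_AGG)
def mean_udf(v):
    # v is a pandas.Series containing all values of the window frame
    return v.mean()

w = (Window
    .partitionBy("id")
    .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))

df.withColumn("mean_v", mean_udf("v").over(w)).show()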
Spark < 2.4:

You cannot. Window functions require a UserDefinedAggregateFunction or an equivalent object, not a UserDefinedFunction, and it is not possible to define one in PySpark.
However, in PySpark 2.3 or later, you can define a vectorized pandas_udf, which can be applied on grouped data. You can find a working example in Applying UDFs on GroupedData in PySpark (with functioning python example). While Pandas doesn't provide a direct equivalent of window functions, it is expressive enough to implement any window-like logic, especially with pandas.DataFrame.rolling. Furthermore, a function used with GroupedData.apply can return an arbitrary number of rows.
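As an illustration of window-like logic via a grouped map, consider this sketch (it assumes a DataFrame df with a string column id and a double column v; the rolling window size of 2 is arbitrary):

from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf("id string, v double, v_mean double", PandasUDFType.GROUPED_MAP)
def rolling_mean(pdf):
    # pdf is a pandas.DataFrame holding all rows of one group
    pdf["v_mean"] = pdf["v"].rolling(window=2, min_periods=1).mean()
    return pdf

df.groupBy("id").apply(rolling_mean).show()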
You can also call a Scala UDAF from PySpark; see Spark: How to map Python with Scala or Java User Defined Functions?
If you don't return anything, just use return; or omit it entirely at the end of the function. If your function usually returns something but doesn't for some reason, return null; is the way to go.

That's similar to how you do it in C: if your function doesn't return anything, it's void; otherwise it often returns either a valid pointer or NULL.